Test Report: Docker_macOS 14079

798c4e8fed290cfa318a9fb994a7c6f555db39c1:2022-06-01:24222

Failed tests (23/288)

TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
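
The check that fails at aaa_download_only_test.go:107 is a plain stat of the cached preload tarball. A minimal sketch of that kind of check, assuming the path layout shown in the log above; the helper name verifyPreloadExists is illustrative, not the test's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// verifyPreloadExists mirrors the stat-based existence check reported
// above: it fails if the cached tarball was never downloaded.
func verifyPreloadExists(minikubeHome, k8sVersion string) error {
	// File name layout copied from the log: it embeds the Kubernetes
	// version, container runtime, and architecture.
	tarball := filepath.Join(minikubeHome, ".minikube", "cache", "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion))
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("failed to verify preloaded tarball file exists: %w", err)
	}
	return nil
}

func main() {
	if err := verifyPreloadExists(os.Getenv("HOME"), "v1.16.0"); err != nil {
		fmt.Println(err) // on this run: "... no such file or directory"
	}
}

The v1.23.6 failure below is the same check against the same cache directory, only with a different version string in the file name.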

TestDownloadOnly/v1.23.6/preload-exists (0.07s)

=== RUN   TestDownloadOnly/v1.23.6/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.6/preload-exists (0.07s)

TestDownloadOnlyKic (2.78s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220601105731-16804 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220601105731-16804 --force --alsologtostderr --driver=docker : (2.231690023s)
aaa_download_only_test.go:236: failed to read tarball file "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4": open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4: no such file or directory
aaa_download_only_test.go:246: failed to read checksum file "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4.checksum" : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4.checksum: no such file or directory
aaa_download_only_test.go:249: failed to verify checksum. checksum of "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4" does not match remote checksum ("" != "\xd4\x1d\x8cُ\x00\xb2\x04\xe9\x80\t\x98\xec\xf8B~")
helpers_test.go:175: Cleaning up "download-docker-20220601105731-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220601105731-16804
--- FAIL: TestDownloadOnlyKic (2.78s)
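
The raw byte string in the checksum mismatch above decodes to d41d8cd98f00b204e9800998ecf8427e, the MD5 of empty input, consistent with the tarball never having been written. A minimal sketch of an MD5 file-versus-sidecar-checksum comparison of the kind the test performs; the ".checksum" suffix comes from the log, while the helper names and comparison details are illustrative, not minikube's actual code:

package main

import (
	"bytes"
	"crypto/md5"
	"fmt"
	"io"
	"os"
)

// fileMD5 hashes the file at path; a missing file surfaces as an
// error, matching the "failed to read tarball file" lines above.
func fileMD5(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}

// verifyChecksum compares the tarball's digest against the raw digest
// bytes stored in the sidecar .checksum file next to it.
func verifyChecksum(tarball string) error {
	stored, err := os.ReadFile(tarball + ".checksum")
	if err != nil {
		return fmt.Errorf("failed to read checksum file: %w", err)
	}
	sum, err := fileMD5(tarball)
	if err != nil {
		return err
	}
	if !bytes.Equal(stored, sum) {
		return fmt.Errorf("checksum of %q does not match remote checksum (%q != %q)", tarball, stored, sum)
	}
	return nil
}

func main() {
	tarball := os.Getenv("HOME") + "/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4"
	if err := verifyChecksum(tarball); err != nil {
		fmt.Println(err)
	}
}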

TestIngressAddonLegacy/StartLegacyK8sCluster (251.92s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601110427-16804 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0601 11:04:28.321378   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:48.804276   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:05:29.767648   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:06:51.691949   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:08:14.553174   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:14.558760   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:14.570960   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:14.593167   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:14.634030   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:14.715687   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:14.877946   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:15.198958   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:15.841281   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:17.122230   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:19.684607   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:24.805660   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:08:35.048328   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601110427-16804 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m11.892945958s)

-- stdout --
	* [ingress-addon-legacy-20220601110427-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node ingress-addon-legacy-20220601110427-16804 in cluster ingress-addon-legacy-20220601110427-16804
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:04:27.094715   18853 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:04:27.095361   18853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:04:27.095370   18853 out.go:309] Setting ErrFile to fd 2...
	I0601 11:04:27.095377   18853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:04:27.095634   18853 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:04:27.096303   18853 out.go:303] Setting JSON to false
	I0601 11:04:27.111376   18853 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5637,"bootTime":1654101030,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:04:27.111555   18853 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:04:27.133767   18853 out.go:177] * [ingress-addon-legacy-20220601110427-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:04:27.177459   18853 notify.go:193] Checking for updates...
	I0601 11:04:27.199425   18853 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:04:27.221576   18853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:04:27.243452   18853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:04:27.265636   18853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:04:27.287466   18853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:04:27.309643   18853 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:04:27.382356   18853 docker.go:137] docker version: linux-20.10.14
	I0601 11:04:27.382528   18853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:04:27.510316   18853 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:04:27.458951412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:04:27.554147   18853 out.go:177] * Using the docker driver based on user configuration
	I0601 11:04:27.575976   18853 start.go:284] selected driver: docker
	I0601 11:04:27.575999   18853 start.go:806] validating driver "docker" against <nil>
	I0601 11:04:27.576027   18853 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:04:27.579510   18853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:04:27.708506   18853 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:04:27.656009093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:04:27.708684   18853 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:04:27.708837   18853 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:04:27.730734   18853 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:04:27.752551   18853 cni.go:95] Creating CNI manager for ""
	I0601 11:04:27.752584   18853 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:04:27.752611   18853 start_flags.go:306] config:
	{Name:ingress-addon-legacy-20220601110427-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601110427-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:04:27.774377   18853 out.go:177] * Starting control plane node ingress-addon-legacy-20220601110427-16804 in cluster ingress-addon-legacy-20220601110427-16804
	I0601 11:04:27.817540   18853 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:04:27.839408   18853 out.go:177] * Pulling base image ...
	I0601 11:04:27.881524   18853 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 11:04:27.881523   18853 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:04:27.950873   18853 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:04:27.950900   18853 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:04:27.952299   18853 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0601 11:04:27.952328   18853 cache.go:57] Caching tarball of preloaded images
	I0601 11:04:27.952511   18853 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 11:04:27.994887   18853 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0601 11:04:28.016898   18853 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 11:04:28.114369   18853 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0601 11:04:30.373344   18853 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 11:04:30.373489   18853 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0601 11:04:30.988881   18853 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0601 11:04:30.989122   18853 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/config.json ...
	I0601 11:04:30.989158   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/config.json: {Name:mke2fe8b19bd9a47fbb138daa6f329d0e1cbc419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:30.989396   18853 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:04:30.989424   18853 start.go:352] acquiring machines lock for ingress-addon-legacy-20220601110427-16804: {Name:mk866fc3fcdbefac05c6fbe5b317a6bbe8a5c9af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:04:30.989511   18853 start.go:356] acquired machines lock for "ingress-addon-legacy-20220601110427-16804" in 78.997µs
	I0601 11:04:30.989534   18853 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220601110427-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601110427-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:04:30.989576   18853 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:04:31.039448   18853 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0601 11:04:31.039669   18853 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220601110427-16804" (driver="docker")
	I0601 11:04:31.039691   18853 client.go:168] LocalClient.Create starting
	I0601 11:04:31.039789   18853 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:04:31.039827   18853 main.go:134] libmachine: Decoding PEM data...
	I0601 11:04:31.039840   18853 main.go:134] libmachine: Parsing certificate...
	I0601 11:04:31.039885   18853 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:04:31.039909   18853 main.go:134] libmachine: Decoding PEM data...
	I0601 11:04:31.039918   18853 main.go:134] libmachine: Parsing certificate...
	I0601 11:04:31.040331   18853 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601110427-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:04:31.105037   18853 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601110427-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:04:31.105131   18853 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220601110427-16804] to gather additional debugging logs...
	I0601 11:04:31.105167   18853 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220601110427-16804
	W0601 11:04:31.167834   18853 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220601110427-16804 returned with exit code 1
	I0601 11:04:31.167857   18853 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220601110427-16804]: docker network inspect ingress-addon-legacy-20220601110427-16804: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220601110427-16804
	I0601 11:04:31.167888   18853 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220601110427-16804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220601110427-16804
	
	** /stderr **
	I0601 11:04:31.167965   18853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:04:31.232384   18853 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00070ebf8] misses:0}
	I0601 11:04:31.232422   18853 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:04:31.232439   18853 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220601110427-16804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:04:31.232518   18853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20220601110427-16804
	I0601 11:04:31.363069   18853 network_create.go:99] docker network ingress-addon-legacy-20220601110427-16804 192.168.49.0/24 created
	I0601 11:04:31.363107   18853 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220601110427-16804" container
	I0601 11:04:31.363231   18853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:04:31.427626   18853 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220601110427-16804 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601110427-16804 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:04:31.491240   18853 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220601110427-16804
	I0601 11:04:31.491393   18853 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220601110427-16804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601110427-16804 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220601110427-16804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:04:31.981065   18853 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220601110427-16804
	I0601 11:04:31.981113   18853 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 11:04:31.981126   18853 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:04:31.981242   18853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220601110427-16804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:04:36.522396   18853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220601110427-16804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (4.540841721s)
	I0601 11:04:36.522439   18853 kic.go:188] duration metric: took 4.541188 seconds to extract preloaded images to volume
	I0601 11:04:36.522536   18853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:04:36.651131   18853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220601110427-16804 --name ingress-addon-legacy-20220601110427-16804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220601110427-16804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220601110427-16804 --network ingress-addon-legacy-20220601110427-16804 --ip 192.168.49.2 --volume ingress-addon-legacy-20220601110427-16804:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:04:37.026335   18853 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601110427-16804 --format={{.State.Running}}
	I0601 11:04:37.130147   18853 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601110427-16804 --format={{.State.Status}}
	I0601 11:04:37.207532   18853 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220601110427-16804 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:04:37.341696   18853 oci.go:247] the created container "ingress-addon-legacy-20220601110427-16804" has a running status.
	I0601 11:04:37.341724   18853 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa...
	I0601 11:04:37.485261   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0601 11:04:37.485321   18853 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:04:37.599367   18853 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601110427-16804 --format={{.State.Status}}
	I0601 11:04:37.668282   18853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:04:37.668300   18853 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220601110427-16804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:04:37.795302   18853 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601110427-16804 --format={{.State.Status}}
	I0601 11:04:37.863896   18853 machine.go:88] provisioning docker machine ...
	I0601 11:04:37.863937   18853 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220601110427-16804"
	I0601 11:04:37.864038   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:37.933890   18853 main.go:134] libmachine: Using SSH client type: native
	I0601 11:04:37.934085   18853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59659 <nil> <nil>}
	I0601 11:04:37.934101   18853 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220601110427-16804 && echo "ingress-addon-legacy-20220601110427-16804" | sudo tee /etc/hostname
	I0601 11:04:38.062392   18853 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220601110427-16804
	
	I0601 11:04:38.062495   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:38.132004   18853 main.go:134] libmachine: Using SSH client type: native
	I0601 11:04:38.132173   18853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59659 <nil> <nil>}
	I0601 11:04:38.132189   18853 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220601110427-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220601110427-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220601110427-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:04:38.247834   18853 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:04:38.247858   18853 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:04:38.247879   18853 ubuntu.go:177] setting up certificates
	I0601 11:04:38.247888   18853 provision.go:83] configureAuth start
	I0601 11:04:38.247949   18853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:38.316222   18853 provision.go:138] copyHostCerts
	I0601 11:04:38.316257   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:04:38.316314   18853 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:04:38.316324   18853 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:04:38.316443   18853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:04:38.316627   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:04:38.316666   18853 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:04:38.316672   18853 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:04:38.316738   18853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:04:38.316865   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:04:38.316902   18853 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:04:38.316907   18853 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:04:38.316966   18853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:04:38.317082   18853 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220601110427-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220601110427-16804]
	I0601 11:04:38.463012   18853 provision.go:172] copyRemoteCerts
	I0601 11:04:38.463068   18853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:04:38.463120   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:38.533061   18853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:04:38.620373   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0601 11:04:38.620485   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:04:38.637726   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0601 11:04:38.637847   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0601 11:04:38.655501   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0601 11:04:38.655574   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:04:38.672530   18853 provision.go:86] duration metric: configureAuth took 424.620189ms
	I0601 11:04:38.672563   18853 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:04:38.672743   18853 config.go:178] Loaded profile config "ingress-addon-legacy-20220601110427-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 11:04:38.672799   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:38.741822   18853 main.go:134] libmachine: Using SSH client type: native
	I0601 11:04:38.741988   18853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59659 <nil> <nil>}
	I0601 11:04:38.742032   18853 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:04:38.858204   18853 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:04:38.858218   18853 ubuntu.go:71] root file system type: overlay
	I0601 11:04:38.858388   18853 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:04:38.858466   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:38.928123   18853 main.go:134] libmachine: Using SSH client type: native
	I0601 11:04:38.928270   18853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59659 <nil> <nil>}
	I0601 11:04:38.928324   18853 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:04:39.055894   18853 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:04:39.055985   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:39.125685   18853 main.go:134] libmachine: Using SSH client type: native
	I0601 11:04:39.125840   18853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59659 <nil> <nil>}
	I0601 11:04:39.125855   18853 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:04:39.709566   18853 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:04:39.056086984 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
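	The diff above hinges on systemd's single-ExecStart rule: a non-oneshot unit may define only one ExecStart=, so an override must first blank the inherited command with a bare "ExecStart=" before supplying its own, exactly as the written file's comment explains. A minimal sketch of the same pattern as a drop-in (the path and dockerd flags here are illustrative, not the ones minikube writes):
	
	    # Clear the inherited ExecStart, then set a replacement; without the blank
	    # directive systemd rejects the unit with "more than one ExecStart= setting".
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    sudo tee /etc/systemd/system/docker.service.d/10-override.conf <<-'EOF'
	    [Service]
	    ExecStart=
	    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart docker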
	
	I0601 11:04:39.709587   18853 machine.go:91] provisioned docker machine in 1.845626254s
	I0601 11:04:39.709593   18853 client.go:171] LocalClient.Create took 8.669693491s
	I0601 11:04:39.709608   18853 start.go:173] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220601110427-16804" took 8.669733535s
	I0601 11:04:39.709617   18853 start.go:306] post-start starting for "ingress-addon-legacy-20220601110427-16804" (driver="docker")
	I0601 11:04:39.709621   18853 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:04:39.709754   18853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:04:39.709802   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:39.783851   18853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:04:39.873972   18853 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:04:39.877725   18853 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:04:39.877747   18853 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:04:39.877755   18853 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:04:39.877761   18853 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:04:39.877768   18853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:04:39.877944   18853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:04:39.878165   18853 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:04:39.878171   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> /etc/ssl/certs/168042.pem
	I0601 11:04:39.878353   18853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:04:39.885543   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:04:39.903040   18853 start.go:309] post-start completed in 193.382247ms
	I0601 11:04:39.903737   18853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:39.973751   18853 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/config.json ...
	I0601 11:04:39.974152   18853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:04:39.974207   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:40.044270   18853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:04:40.131273   18853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:04:40.135781   18853 start.go:134] duration metric: createHost completed in 9.145981576s
	I0601 11:04:40.135800   18853 start.go:81] releasing machines lock for "ingress-addon-legacy-20220601110427-16804", held for 9.146064751s
	I0601 11:04:40.135877   18853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:40.206700   18853 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:04:40.206777   18853 ssh_runner.go:195] Run: systemctl --version
	I0601 11:04:40.206795   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:40.206846   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:40.281001   18853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:04:40.281930   18853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:04:40.504424   18853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:04:40.513383   18853 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:04:40.522478   18853 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:04:40.522524   18853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:04:40.531155   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:04:40.543459   18853 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
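	The crictl.yaml written just above points crictl at the dockershim socket. Assuming the crictl binary is present on the node, it reads /etc/crictl.yaml by default, so a quick smoke test of the configured endpoint could be:
	
	    sudo crictl info    # reports status of the runtime at unix:///var/run/dockershim.sock
	    sudo crictl ps -a   # lists containers through the same endpoint
	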
	I0601 11:04:40.606611   18853 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:04:40.670683   18853 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:04:40.680621   18853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:04:40.749788   18853 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:04:40.759674   18853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:04:40.795764   18853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:04:40.875544   18853 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.16 ...
	I0601 11:04:40.875798   18853 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220601110427-16804 dig +short host.docker.internal
	I0601 11:04:41.012921   18853 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:04:41.013027   18853 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:04:41.017268   18853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
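	The one-liner above is an idempotent /etc/hosts update: grep -v drops any stale line for the host, the fresh entry is appended, and the result is copied back under sudo in a single step. Generalized as a sketch (the helper name is hypothetical; dots in the hostname are treated as regex wildcards, which is fine for this purpose):
	
	    update_hosts_entry() {  # usage: update_hosts_entry <ip> <hostname>
	      local ip="$1" host="$2"
	      { grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
	      sudo cp "/tmp/hosts.$$" /etc/hosts
	    }
	    update_hosts_entry 192.168.65.2 host.minikube.internal
	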
	I0601 11:04:41.027046   18853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:04:41.096422   18853 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0601 11:04:41.096483   18853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:04:41.126465   18853 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0601 11:04:41.126479   18853 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:04:41.126554   18853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:04:41.156686   18853 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0601 11:04:41.156707   18853 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:04:41.156782   18853 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:04:41.231234   18853 cni.go:95] Creating CNI manager for ""
	I0601 11:04:41.231248   18853 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:04:41.231266   18853 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:04:41.231281   18853 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220601110427-16804 NodeName:ingress-addon-legacy-20220601110427-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:04:41.231460   18853 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220601110427-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
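	
	One low-risk way to exercise the kubeadm.yaml generated above (a sketch; the test harness itself goes straight to a real init) is kubeadm's dry-run mode, which renders the control-plane manifests without mutating the node:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run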
	
	I0601 11:04:41.231560   18853 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220601110427-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601110427-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
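	The kubelet unit above relies on the same drop-in mechanics as the docker.service rewrite earlier: the 10-kubeadm.conf fragment blanks the inherited ExecStart and substitutes the minikube-specific command line. Once the scp steps below have landed the files, the merged result could be checked the same way the log checks docker.service (a sketch):
	
	    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
	    sudo systemctl daemon-reload   # make systemd re-read the new fragments
	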
	I0601 11:04:41.231625   18853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0601 11:04:41.239598   18853 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:04:41.239647   18853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:04:41.246661   18853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0601 11:04:41.259694   18853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0601 11:04:41.272532   18853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0601 11:04:41.285184   18853 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:04:41.288628   18853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:04:41.297630   18853 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804 for IP: 192.168.49.2
	I0601 11:04:41.297744   18853 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:04:41.297793   18853 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:04:41.297834   18853 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/client.key
	I0601 11:04:41.297846   18853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/client.crt with IP's: []
	I0601 11:04:41.510208   18853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/client.crt ...
	I0601 11:04:41.510222   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/client.crt: {Name:mk4c5b02becbe6a03d03db23d862f001af5e7816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:41.510562   18853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/client.key ...
	I0601 11:04:41.510577   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/client.key: {Name:mk96024528693ae8faa2fe62110304064d945bee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:41.510800   18853 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key.dd3b5fb2
	I0601 11:04:41.510818   18853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:04:41.674691   18853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt.dd3b5fb2 ...
	I0601 11:04:41.674700   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt.dd3b5fb2: {Name:mkf6ccf9dac147fef7477ee06b56715c44470e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:41.674934   18853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key.dd3b5fb2 ...
	I0601 11:04:41.674942   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key.dd3b5fb2: {Name:mk401e68aa3e708f1839cb513cb43f0ed39fbe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:41.675153   18853 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt
	I0601 11:04:41.675325   18853 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key
	I0601 11:04:41.675498   18853 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.key
	I0601 11:04:41.675514   18853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.crt with IP's: []
	I0601 11:04:41.728924   18853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.crt ...
	I0601 11:04:41.728934   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.crt: {Name:mk668780f4262695f5c502deb2550351c60398be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:41.729140   18853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.key ...
	I0601 11:04:41.729147   18853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.key: {Name:mkc8b01b5e1bd38e8875bb7d2f1895f211400f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:04:41.729320   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0601 11:04:41.729348   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0601 11:04:41.729368   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0601 11:04:41.729385   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0601 11:04:41.729401   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0601 11:04:41.729416   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0601 11:04:41.729432   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0601 11:04:41.729454   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0601 11:04:41.729549   18853 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:04:41.729589   18853 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:04:41.729597   18853 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:04:41.729625   18853 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:04:41.729653   18853 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:04:41.729683   18853 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:04:41.729747   18853 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:04:41.729785   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem -> /usr/share/ca-certificates/16804.pem
	I0601 11:04:41.729803   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> /usr/share/ca-certificates/168042.pem
	I0601 11:04:41.729819   18853 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:04:41.730277   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:04:41.748819   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:04:41.765382   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:04:41.782251   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601110427-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:04:41.799582   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:04:41.816016   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:04:41.833130   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:04:41.850702   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:04:41.867574   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:04:41.884986   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:04:41.902683   18853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:04:41.919310   18853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:04:41.932122   18853 ssh_runner.go:195] Run: openssl version
	I0601 11:04:41.937539   18853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:04:41.945303   18853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:04:41.949457   18853 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:04:41.949495   18853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:04:41.954704   18853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:04:41.962493   18853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:04:41.970571   18853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:04:41.974216   18853 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:04:41.974251   18853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:04:41.979562   18853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:04:41.987169   18853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:04:41.994628   18853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:04:41.998353   18853 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:04:41.998398   18853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:04:42.003173   18853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
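	The openssl/ln sequence above builds OpenSSL's hashed-directory index: each CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, which is how minikubeCA.pem ends up behind b5213941.0. Condensed into two lines (a sketch of what the log just did):
	
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	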
	I0601 11:04:42.010833   18853 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220601110427-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220601110427-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:04:42.010973   18853 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:04:42.040286   18853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:04:42.047933   18853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:04:42.055144   18853 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:04:42.055201   18853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:04:42.062610   18853 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:04:42.062679   18853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:06:39.724274   18853 out.go:204]   - Generating certificates and keys ...
	I0601 11:06:39.768023   18853 out.go:204]   - Booting up control plane ...
	W0601 11:06:39.771624   18853 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220601110427-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220601110427-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 18:04:42.114949     828 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:04:44.769859     828 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:04:44.770912     828 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
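	kubeadm's hints above are the practical next step when wait-control-plane times out; collected on the node they would look like the following sketch (the 10248 endpoint is the exact healthz URL kubeadm polls):
	
	    curl -sSL http://localhost:10248/healthz    # kubelet health probe kubeadm retries
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    docker ps -a | grep kube | grep -v pause    # surface crashed control-plane containers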
	
	I0601 11:06:39.771661   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:06:40.198111   18853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:06:40.207231   18853 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:06:40.207285   18853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:06:40.215311   18853 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:06:40.215332   18853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:06:40.940149   18853 out.go:204]   - Generating certificates and keys ...
	I0601 11:06:41.393423   18853 out.go:204]   - Booting up control plane ...
	I0601 11:08:36.368998   18853 kubeadm.go:397] StartCluster complete in 3m54.297200359s
	I0601 11:08:36.369071   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:08:36.397967   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.397980   18853 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:08:36.398032   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:08:36.426397   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.426410   18853 logs.go:276] No container was found matching "etcd"
	I0601 11:08:36.426465   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:08:36.454167   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.454179   18853 logs.go:276] No container was found matching "coredns"
	I0601 11:08:36.454251   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:08:36.481660   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.481672   18853 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:08:36.481729   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:08:36.510440   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.510459   18853 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:08:36.510512   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:08:36.539156   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.539172   18853 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:08:36.539231   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:08:36.567968   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.567980   18853 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:08:36.568051   18853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:08:36.598090   18853 logs.go:274] 0 containers: []
	W0601 11:08:36.598105   18853 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:08:36.598112   18853 logs.go:123] Gathering logs for kubelet ...
	I0601 11:08:36.598118   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:08:36.639877   18853 logs.go:123] Gathering logs for dmesg ...
	I0601 11:08:36.639890   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:08:36.651858   18853 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:08:36.651870   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:08:36.705453   18853 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:08:36.705468   18853 logs.go:123] Gathering logs for Docker ...
	I0601 11:08:36.705477   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:08:36.722596   18853 logs.go:123] Gathering logs for container status ...
	I0601 11:08:36.722610   18853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:08:38.782929   18853 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060246213s)
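	The container-status line above is deliberately runtime-agnostic: "which crictl || echo crictl" makes the sudo invocation try crictl first, and the trailing "|| sudo docker ps -a" falls back to the Docker CLI when crictl is absent or fails. As a standalone line:
	
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	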
	W0601 11:08:38.783048   18853 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 18:06:40.264792    3314 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:06:41.381551    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:06:41.382683    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 11:08:38.783067   18853 out.go:239] * 
	* 
	W0601 11:08:38.783191   18853 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 18:06:40.264792    3314 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:06:41.381551    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:06:41.382683    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 18:06:40.264792    3314 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:06:41.381551    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:06:41.382683    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:08:38.783208   18853 out.go:239] * 
	* 
	W0601 11:08:38.783758   18853 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:38.847839   18853 out.go:177] 
	W0601 11:08:38.912946   18853 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 18:06:40.264792    3314 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:06:41.381551    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:06:41.382683    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0601 18:06:40.264792    3314 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:06:41.381551    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:06:41.382683    3314 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:08:38.913129   18853 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 11:08:38.913203   18853 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 11:08:38.934811   18853 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220601110427-16804 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (251.92s)
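This failure never reaches the apiserver: kubeadm times out polling the kubelet health endpoint (localhost:10248) and minikube exits with K8S_KUBELET_NOT_RUNNING, pointing at the kubelet cgroup driver. A minimal triage sketch, assuming the profile name from this run and that the node container is still up (these commands are a hypothetical follow-up, not part of the recorded run):

	PROFILE=ingress-addon-legacy-20220601110427-16804
	# Kubelet state inside the node, as the kubeadm output itself suggests:
	minikube -p "$PROFILE" ssh -- sudo systemctl status kubelet
	minikube -p "$PROFILE" ssh -- sudo journalctl -xeu kubelet
	# Check whether a control-plane container crashed at startup:
	minikube -p "$PROFILE" ssh -- "docker ps -a | grep kube | grep -v pause"
	# Workaround suggested by the log: pin the kubelet cgroup driver.
	minikube start -p "$PROFILE" --kubernetes-version=v1.18.20 --extra-config=kubelet.cgroup-driver=systemd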

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601110427-16804 addons enable ingress --alsologtostderr -v=5
E0601 11:08:55.588941   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:09:07.898303   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:09:35.592486   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:09:36.552564   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601110427-16804 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.090947265s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0601 11:08:39.077536   18977 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:08:39.077758   18977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:39.077764   18977 out.go:309] Setting ErrFile to fd 2...
	I0601 11:08:39.077767   18977 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:39.077864   18977 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:08:39.078323   18977 config.go:178] Loaded profile config "ingress-addon-legacy-20220601110427-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 11:08:39.078339   18977 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220601110427-16804"
	I0601 11:08:39.078346   18977 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220601110427-16804"
	I0601 11:08:39.078624   18977 host.go:66] Checking if "ingress-addon-legacy-20220601110427-16804" exists ...
	I0601 11:08:39.079119   18977 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601110427-16804 --format={{.State.Status}}
	I0601 11:08:39.167498   18977 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0601 11:08:39.189918   18977 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0601 11:08:39.211545   18977 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0601 11:08:39.233303   18977 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0601 11:08:39.255471   18977 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0601 11:08:39.255509   18977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0601 11:08:39.255635   18977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:08:39.323510   18977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:08:39.415361   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:39.467937   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:39.467976   18977 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:39.744383   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:39.796644   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:39.796661   18977 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:40.338739   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:40.391214   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:40.391232   18977 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:41.048555   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:41.102972   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:41.102996   18977 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:41.894898   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:41.949549   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:41.949564   18977 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:43.120101   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:43.171710   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:43.171725   18977 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:45.425306   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:45.476292   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:45.476308   18977 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:47.088038   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:47.139478   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:47.139494   18977 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:49.944201   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:49.998273   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:49.998294   18977 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:53.823661   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:08:53.875885   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:08:53.875899   18977 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:09:01.574147   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:09:01.625445   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:09:01.625459   18977 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:09:16.263763   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:09:16.317630   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:09:16.317646   18977 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:09:44.727438   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:09:44.778783   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:09:44.778799   18977 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:07.948036   18977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0601 11:10:08.000196   18977 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:08.000224   18977 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20220601110427-16804"
	I0601 11:10:08.022332   18977 out.go:177] * Verifying ingress addon...
	I0601 11:10:08.044726   18977 out.go:177] 
	W0601 11:10:08.065850   18977 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220601110427-16804" does not exist: client config: context "ingress-addon-legacy-20220601110427-16804" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220601110427-16804" does not exist: client config: context "ingress-addon-legacy-20220601110427-16804" does not exist]
	W0601 11:10:08.065883   18977 out.go:239] * 
	* 
	W0601 11:10:08.069783   18977 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:10:08.090833   18977 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
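The enable step fails for the same underlying reason as the cluster start: every kubectl apply of ingress-deploy.yaml hits a dead apiserver on localhost:8443, with retry.go backing off from roughly 276ms to 28s before minikube gives up with MK_ADDON_ENABLE. A re-check sketch, reusing the binary and kubeconfig paths recorded above (hypothetical follow-up, not part of this run):

	PROFILE=ingress-addon-legacy-20220601110427-16804
	# Probe the apiserver health endpoint with the node's own kubectl and kubeconfig:
	minikube -p "$PROFILE" ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl get --raw=/healthz
	# Only once this prints "ok" is re-enabling the addon worth retrying:
	minikube -p "$PROFILE" addons enable ingress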
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601110427-16804
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220601110427-16804:

-- stdout --
	[
	    {
	        "Id": "54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5",
	        "Created": "2022-06-01T18:04:36.718886401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:04:37.019610446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/hosts",
	        "LogPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5-json.log",
	        "Name": "/ingress-addon-legacy-20220601110427-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220601110427-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220601110427-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220601110427-16804",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220601110427-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220601110427-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601110427-16804",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601110427-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd3d5355db357c7b2ae5bddfc999222659f46dceba14863ca02e3bf654cafa4f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59659"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59660"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59661"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59662"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59663"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dd3d5355db35",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220601110427-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "54958a790074",
	                        "ingress-addon-legacy-20220601110427-16804"
	                    ],
	                    "NetworkID": "3d465fa6d8d1c36520c057f405ec588569fff74b80b815e68f005b06e556c092",
	                    "EndpointID": "ee49746fbdec091e79d7ee94f2b7a6f1b9a1f4f4ee6af46c851b546bb6f3aaf8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601110427-16804 -n ingress-addon-legacy-20220601110427-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601110427-16804 -n ingress-addon-legacy-20220601110427-16804: exit status 6 (426.412129ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:10:08.609294   18993 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220601110427-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601110427-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)
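Triage note on the failure mode above: the container itself is healthy per docker inspect (State.Status "running", SSH published on 127.0.0.1:59659), but `minikube status` exits 6 because the profile name is missing from the kubeconfig ("does not appear in .../kubeconfig"). A minimal Go sketch of that kind of kubeconfig lookup, assuming k8s.io/client-go; this is illustrative, not minikube's actual status.go:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// endpointFor returns the API-server URL recorded for a profile in a
// kubeconfig, failing the same way the status check above does when the
// profile has no cluster entry.
func endpointFor(kubeconfigPath, profile string) (string, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return "", fmt.Errorf("loading kubeconfig: %w", err)
	}
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		// The condition behind "extract IP: ... does not appear in ... kubeconfig".
		return "", fmt.Errorf("%q does not appear in %s", profile, kubeconfigPath)
	}
	return cluster.Server, nil
}

func main() {
	ep, err := endpointFor(os.Getenv("KUBECONFIG"), "ingress-addon-legacy-20220601110427-16804")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ep)
}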

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601110427-16804 addons enable ingress-dns --alsologtostderr -v=5
E0601 11:10:58.476057   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220601110427-16804 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.011451654s)

                                                
                                                
-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:10:08.667352   19003 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:10:08.667608   19003 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:10:08.667613   19003 out.go:309] Setting ErrFile to fd 2...
	I0601 11:10:08.667617   19003 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:10:08.667710   19003 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:10:08.668144   19003 config.go:178] Loaded profile config "ingress-addon-legacy-20220601110427-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0601 11:10:08.668159   19003 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220601110427-16804"
	I0601 11:10:08.668169   19003 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220601110427-16804"
	I0601 11:10:08.668418   19003 host.go:66] Checking if "ingress-addon-legacy-20220601110427-16804" exists ...
	I0601 11:10:08.668879   19003 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220601110427-16804 --format={{.State.Status}}
	I0601 11:10:08.755917   19003 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0601 11:10:08.777724   19003 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0601 11:10:08.800925   19003 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0601 11:10:08.800976   19003 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0601 11:10:08.801161   19003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220601110427-16804
	I0601 11:10:08.869751   19003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59659 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/ingress-addon-legacy-20220601110427-16804/id_rsa Username:docker}
	I0601 11:10:08.960108   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:09.010312   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:09.010333   19003 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:09.287745   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:09.339647   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:09.339662   19003 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:09.880057   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:09.931349   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:09.931363   19003 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:10.587485   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:10.639022   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:10.639040   19003 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:11.432537   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:11.485143   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:11.485156   19003 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:12.656732   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:12.709820   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:12.709848   19003 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:14.964197   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:15.013187   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:15.013201   19003 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:16.626336   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:16.678597   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:16.678612   19003 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:19.484032   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:19.535806   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:19.535820   19003 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:23.362017   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:23.414044   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:23.414058   19003 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:31.114080   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:31.165757   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:31.165773   19003 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:45.804139   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:10:45.856262   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:10:45.856284   19003 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:11:14.266136   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:11:14.317395   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:11:14.317411   19003 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:11:37.488637   19003 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0601 11:11:37.540413   19003 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:11:37.562362   19003 out.go:177] 
	W0601 11:11:37.584273   19003 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0601 11:11:37.584299   19003 out.go:239] * 
	* 
	W0601 11:11:37.588288   19003 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:11:37.609256   19003 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
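The stderr above shows minikube's retry loop (retry.go:31) re-running `kubectl apply` with a growing, jittered delay (276ms, 540ms, ... up to 28.4s) until the command finally gives up with exit status 10. A minimal sketch of that retry-with-backoff shape, assuming nothing about minikube's actual retry.go beyond what the log shows (the schedule here is a plain doubling, without jitter):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff re-runs op with a widening delay between attempts, giving
// up once the next wait would pass the overall deadline -- the same shape as
// the "will retry after ..." lines in the log above.
func retryWithBackoff(maxElapsed time.Duration, op func() error) error {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(maxElapsed)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("giving up after %s: %w", maxElapsed, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // widen the interval on each failure
	}
}

func main() {
	err := retryWithBackoff(5*time.Second, func() error {
		return errors.New("connection refused") // stand-in for the kubectl apply failure
	})
	fmt.Println(err)
}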
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601110427-16804
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220601110427-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5",
	        "Created": "2022-06-01T18:04:36.718886401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:04:37.019610446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/hosts",
	        "LogPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5-json.log",
	        "Name": "/ingress-addon-legacy-20220601110427-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220601110427-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220601110427-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220601110427-16804",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220601110427-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220601110427-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601110427-16804",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601110427-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd3d5355db357c7b2ae5bddfc999222659f46dceba14863ca02e3bf654cafa4f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59659"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59660"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59661"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59662"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59663"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dd3d5355db35",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220601110427-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "54958a790074",
	                        "ingress-addon-legacy-20220601110427-16804"
	                    ],
	                    "NetworkID": "3d465fa6d8d1c36520c057f405ec588569fff74b80b815e68f005b06e556c092",
	                    "EndpointID": "ee49746fbdec091e79d7ee94f2b7a6f1b9a1f4f4ee6af46c851b546bb6f3aaf8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601110427-16804 -n ingress-addon-legacy-20220601110427-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601110427-16804 -n ingress-addon-legacy-20220601110427-16804: exit status 6 (449.200409ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:11:38.141824   19017 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220601110427-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601110427-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)
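The post-mortem helper shells out to `docker inspect` and dumps the whole document; for triage, only a couple of fields matter. A hedged Go sketch that decodes just those fields from the JSON shape shown above (field names taken from the dump; the helper itself may work differently):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectResult mirrors the handful of fields from the docker inspect
// output above that the post-mortem actually cares about.
type inspectResult struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"ingress-addon-legacy-20220601110427-16804").Output()
	if err != nil {
		panic(err)
	}
	var results []inspectResult // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &results); err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Printf("%s: status=%s running=%v ssh=%v\n",
			r.Name, r.State.Status, r.State.Running, r.NetworkSettings.Ports["22/tcp"])
	}
}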

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:156: failed to get Kubernetes client: <nil>
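"failed to get Kubernetes client: <nil>" is consistent with the two failures above: with no entry for the profile in the kubeconfig, no client can be built. A minimal client-go construction sketch showing where that step would fail (path handling and error messages illustrative, not the test helper's code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: the test run uses the kubeconfig under the
	// minikube-integration root shown in the log.
	kubeconfig := os.Getenv("KUBECONFIG")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to build REST config:", err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to get Kubernetes client:", err)
		os.Exit(1)
	}
	fmt.Printf("client ready: %T\n", client)
}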
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220601110427-16804
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220601110427-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5",
	        "Created": "2022-06-01T18:04:36.718886401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:04:37.019610446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/hosts",
	        "LogPath": "/var/lib/docker/containers/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5/54958a790074f0cdc8bce9ee6d1a9fc50b7faf5d7527e2272aabb7d62995e6d5-json.log",
	        "Name": "/ingress-addon-legacy-20220601110427-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220601110427-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220601110427-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bca907c2df01aefa952a3c64037b00636bbdbcfab7870503a5df37e521407756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220601110427-16804",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220601110427-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220601110427-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601110427-16804",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220601110427-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd3d5355db357c7b2ae5bddfc999222659f46dceba14863ca02e3bf654cafa4f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59659"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59660"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59661"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59662"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59663"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dd3d5355db35",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220601110427-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "54958a790074",
	                        "ingress-addon-legacy-20220601110427-16804"
	                    ],
	                    "NetworkID": "3d465fa6d8d1c36520c057f405ec588569fff74b80b815e68f005b06e556c092",
	                    "EndpointID": "ee49746fbdec091e79d7ee94f2b7a6f1b9a1f4f4ee6af46c851b546bb6f3aaf8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
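
Note: the "Ports" map in the inspect output above records the host ports Docker assigned at container creation. For local debugging, a single mapping can be read back with the same Go template that minikube itself runs later in this report (a sketch; the container name is taken from this log, substitute your own profile name):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-20220601110427-16804

For the state captured above, this prints 59659.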
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601110427-16804 -n ingress-addon-legacy-20220601110427-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220601110427-16804 -n ingress-addon-legacy-20220601110427-16804: exit status 6 (427.125911ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:11:38.640706   19029 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220601110427-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220601110427-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)
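
Note: the exit status 6 here appears to come from the kubeconfig check rather than from the container state: the stderr shows the profile has no entry in the kubeconfig file, so `status` sees the host as "Running" but cannot extract an API endpoint. When reproducing locally, the warning's own suggestion is the usual fix (profile name taken from this log; `-p` selects the profile):

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-20220601110427-16804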

TestPreload (264.92s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220601112248-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0601 11:23:14.634240   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:24:07.923269   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:24:37.702531   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220601112248-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m21.799693153s)

-- stdout --
	* [test-preload-20220601112248-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node test-preload-20220601112248-16804 in cluster test-preload-20220601112248-16804
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:22:48.804604   22060 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:22:48.804838   22060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:22:48.804843   22060 out.go:309] Setting ErrFile to fd 2...
	I0601 11:22:48.804847   22060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:22:48.804952   22060 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:22:48.805276   22060 out.go:303] Setting JSON to false
	I0601 11:22:48.820201   22060 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":6738,"bootTime":1654101030,"procs":347,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:22:48.820318   22060 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:22:48.842441   22060 out.go:177] * [test-preload-20220601112248-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:22:48.864375   22060 notify.go:193] Checking for updates...
	I0601 11:22:48.886157   22060 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:22:48.907860   22060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:22:48.929262   22060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:22:48.950255   22060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:22:48.972163   22060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:22:48.993460   22060 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:22:49.068233   22060 docker.go:137] docker version: linux-20.10.14
	I0601 11:22:49.068339   22060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:22:49.197078   22060 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:22:49.138337582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:22:49.240501   22060 out.go:177] * Using the docker driver based on user configuration
	I0601 11:22:49.261809   22060 start.go:284] selected driver: docker
	I0601 11:22:49.261837   22060 start.go:806] validating driver "docker" against <nil>
	I0601 11:22:49.261864   22060 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:22:49.265301   22060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:22:49.400311   22060 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:22:49.341542496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:22:49.400431   22060 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:22:49.400615   22060 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:22:49.422459   22060 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:22:49.444072   22060 cni.go:95] Creating CNI manager for ""
	I0601 11:22:49.444116   22060 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:22:49.444129   22060 start_flags.go:306] config:
	{Name:test-preload-20220601112248-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601112248-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:22:49.466154   22060 out.go:177] * Starting control plane node test-preload-20220601112248-16804 in cluster test-preload-20220601112248-16804
	I0601 11:22:49.508096   22060 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:22:49.529284   22060 out.go:177] * Pulling base image ...
	I0601 11:22:49.571061   22060 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 11:22:49.571076   22060 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:22:49.572295   22060 cache.go:107] acquiring lock: {Name:mkc2b7687b85bf181ba92b38734b383aabb3f5f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.571403   22060 cache.go:107] acquiring lock: {Name:mk3f2d0f507e29cac613426c429959a8e7117fcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.573158   22060 cache.go:107] acquiring lock: {Name:mkc827abcdac35b7b58aa48e02bb25639e446102 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.573287   22060 cache.go:107] acquiring lock: {Name:mk20625b3b42652fa2b97770f3cffe50031cfe8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.573285   22060 cache.go:107] acquiring lock: {Name:mkc20378ceb6d59e990f48d737a0943fdfb8bd40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.573368   22060 cache.go:107] acquiring lock: {Name:mk69a26b32a92db2000ffeb5e2a0c30929c75e95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.573717   22060 cache.go:107] acquiring lock: {Name:mk7b151ed5d910f644def4b771d561ddb39e1ef3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.574545   22060 cache.go:107] acquiring lock: {Name:mk80b47221235a7ef1d2d5f5b435025050eb6224 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.574619   22060 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 11:22:49.574639   22060 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 11:22:49.574644   22060 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 11:22:49.574659   22060 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.275731ms
	I0601 11:22:49.574723   22060 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 11:22:49.574823   22060 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0601 11:22:49.574840   22060 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 11:22:49.574849   22060 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 11:22:49.574948   22060 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 11:22:49.574952   22060 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0601 11:22:49.575012   22060 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/config.json ...
	I0601 11:22:49.575042   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/config.json: {Name:mk6aeb50d0a2c705817b9fa7fdc63120b249ad5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:22:49.582327   22060 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0601 11:22:49.583238   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:49.583725   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:49.583956   22060 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0601 11:22:49.584814   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:49.584847   22060 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0601 11:22:49.584882   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:49.646290   22060 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:22:49.646312   22060 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:22:49.646325   22060 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:22:49.646364   22060 start.go:352] acquiring machines lock for test-preload-20220601112248-16804: {Name:mk9e95fcaf919cec95a1273133ea07107d87c119 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:22:49.646495   22060 start.go:356] acquired machines lock for "test-preload-20220601112248-16804" in 119.954µs
	I0601 11:22:49.646519   22060 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220601112248-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601112248-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:22:49.646620   22060 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:22:49.668702   22060 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:22:49.668989   22060 start.go:165] libmachine.API.Create for "test-preload-20220601112248-16804" (driver="docker")
	I0601 11:22:49.669017   22060 client.go:168] LocalClient.Create starting
	I0601 11:22:49.669089   22060 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:22:49.669124   22060 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:49.669138   22060 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:49.669195   22060 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:22:49.669218   22060 main.go:134] libmachine: Decoding PEM data...
	I0601 11:22:49.669229   22060 main.go:134] libmachine: Parsing certificate...
	I0601 11:22:49.669656   22060 cli_runner.go:164] Run: docker network inspect test-preload-20220601112248-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:22:49.737055   22060 cli_runner.go:211] docker network inspect test-preload-20220601112248-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:22:49.737125   22060 network_create.go:272] running [docker network inspect test-preload-20220601112248-16804] to gather additional debugging logs...
	I0601 11:22:49.737140   22060 cli_runner.go:164] Run: docker network inspect test-preload-20220601112248-16804
	W0601 11:22:49.802052   22060 cli_runner.go:211] docker network inspect test-preload-20220601112248-16804 returned with exit code 1
	I0601 11:22:49.802070   22060 network_create.go:275] error running [docker network inspect test-preload-20220601112248-16804]: docker network inspect test-preload-20220601112248-16804: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220601112248-16804
	I0601 11:22:49.802089   22060 network_create.go:277] output of [docker network inspect test-preload-20220601112248-16804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220601112248-16804
	
	** /stderr **
	I0601 11:22:49.802142   22060 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:22:49.866082   22060 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00110a2f0] misses:0}
	I0601 11:22:49.866114   22060 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:22:49.866128   22060 network_create.go:115] attempt to create docker network test-preload-20220601112248-16804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:22:49.866189   22060 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20220601112248-16804
	I0601 11:22:49.964186   22060 network_create.go:99] docker network test-preload-20220601112248-16804 192.168.49.0/24 created
	I0601 11:22:49.964227   22060 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20220601112248-16804" container
	I0601 11:22:49.964298   22060 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:22:50.029637   22060 cli_runner.go:164] Run: docker volume create test-preload-20220601112248-16804 --label name.minikube.sigs.k8s.io=test-preload-20220601112248-16804 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:22:50.082233   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0601 11:22:50.082335   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0601 11:22:50.095112   22060 oci.go:103] Successfully created a docker volume test-preload-20220601112248-16804
	I0601 11:22:50.095182   22060 cli_runner.go:164] Run: docker run --rm --name test-preload-20220601112248-16804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220601112248-16804 --entrypoint /usr/bin/test -v test-preload-20220601112248-16804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:22:50.099715   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0601 11:22:50.115679   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0601 11:22:50.117337   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0601 11:22:50.128454   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0601 11:22:50.162023   22060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0601 11:22:50.163278   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0601 11:22:50.163293   22060 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 590.316946ms
	I0601 11:22:50.163304   22060 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0601 11:22:50.519983   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0601 11:22:50.520007   22060 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 946.863215ms
	I0601 11:22:50.520025   22060 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0601 11:22:50.593846   22060 oci.go:107] Successfully prepared a docker volume test-preload-20220601112248-16804
	I0601 11:22:50.593870   22060 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 11:22:50.593936   22060 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:22:50.730789   22060 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220601112248-16804 --name test-preload-20220601112248-16804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220601112248-16804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220601112248-16804 --network test-preload-20220601112248-16804 --ip 192.168.49.2 --volume test-preload-20220601112248-16804:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:22:50.821742   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0601 11:22:50.821768   22060 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 1.248637182s
	I0601 11:22:50.821777   22060 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0601 11:22:50.891264   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0601 11:22:50.891289   22060 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 1.319858609s
	I0601 11:22:50.891299   22060 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0601 11:22:50.898604   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0601 11:22:50.898621   22060 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 1.325602134s
	I0601 11:22:50.898631   22060 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0601 11:22:51.137927   22060 cli_runner.go:164] Run: docker container inspect test-preload-20220601112248-16804 --format={{.State.Running}}
	I0601 11:22:51.140175   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0601 11:22:51.140203   22060 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 1.567461341s
	I0601 11:22:51.140231   22060 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0601 11:22:51.212516   22060 cli_runner.go:164] Run: docker container inspect test-preload-20220601112248-16804 --format={{.State.Status}}
	I0601 11:22:51.291784   22060 cli_runner.go:164] Run: docker exec test-preload-20220601112248-16804 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:22:51.512892   22060 oci.go:247] the created container "test-preload-20220601112248-16804" has a running status.
	I0601 11:22:51.512919   22060 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa...
	I0601 11:22:51.708408   22060 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:22:51.822622   22060 cli_runner.go:164] Run: docker container inspect test-preload-20220601112248-16804 --format={{.State.Status}}
	I0601 11:22:51.893628   22060 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:22:51.893647   22060 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220601112248-16804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:22:52.018742   22060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0601 11:22:52.018763   22060 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 2.445514682s
	I0601 11:22:52.018795   22060 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0601 11:22:52.018812   22060 cache.go:87] Successfully saved all images to host disk.
	I0601 11:22:52.025978   22060 cli_runner.go:164] Run: docker container inspect test-preload-20220601112248-16804 --format={{.State.Status}}
	I0601 11:22:52.094043   22060 machine.go:88] provisioning docker machine ...
	I0601 11:22:52.094301   22060 ubuntu.go:169] provisioning hostname "test-preload-20220601112248-16804"
	I0601 11:22:52.094393   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:52.165400   22060 main.go:134] libmachine: Using SSH client type: native
	I0601 11:22:52.165593   22060 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 65494 <nil> <nil>}
	I0601 11:22:52.165609   22060 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220601112248-16804 && echo "test-preload-20220601112248-16804" | sudo tee /etc/hostname
	I0601 11:22:52.290551   22060 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220601112248-16804
	
	I0601 11:22:52.290621   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:52.369684   22060 main.go:134] libmachine: Using SSH client type: native
	I0601 11:22:52.369850   22060 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 65494 <nil> <nil>}
	I0601 11:22:52.369865   22060 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220601112248-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220601112248-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220601112248-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:22:52.490993   22060 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:22:52.491011   22060 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:22:52.491043   22060 ubuntu.go:177] setting up certificates
	I0601 11:22:52.491051   22060 provision.go:83] configureAuth start
	I0601 11:22:52.491142   22060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220601112248-16804
	I0601 11:22:52.560101   22060 provision.go:138] copyHostCerts
	I0601 11:22:52.560204   22060 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:22:52.560212   22060 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:22:52.560300   22060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:22:52.560519   22060 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:22:52.560532   22060 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:22:52.560592   22060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:22:52.560731   22060 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:22:52.560736   22060 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:22:52.560794   22060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:22:52.560909   22060 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220601112248-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220601112248-16804]
	I0601 11:22:52.721697   22060 provision.go:172] copyRemoteCerts
	I0601 11:22:52.721748   22060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:22:52.721819   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:52.791738   22060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65494 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa Username:docker}
	I0601 11:22:52.879523   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0601 11:22:52.896520   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:22:52.914234   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:22:52.932131   22060 provision.go:86] duration metric: configureAuth took 441.055405ms
	I0601 11:22:52.932144   22060 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:22:52.932343   22060 config.go:178] Loaded profile config "test-preload-20220601112248-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0601 11:22:52.932453   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:53.001972   22060 main.go:134] libmachine: Using SSH client type: native
	I0601 11:22:53.002304   22060 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 65494 <nil> <nil>}
	I0601 11:22:53.002319   22060 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:22:53.131060   22060 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:22:53.131072   22060 ubuntu.go:71] root file system type: overlay
	I0601 11:22:53.131180   22060 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:22:53.131249   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:53.200517   22060 main.go:134] libmachine: Using SSH client type: native
	I0601 11:22:53.200677   22060 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 65494 <nil> <nil>}
	I0601 11:22:53.200737   22060 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:22:53.331499   22060 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:22:53.331582   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:53.401071   22060 main.go:134] libmachine: Using SSH client type: native
	I0601 11:22:53.401252   22060 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 65494 <nil> <nil>}
	I0601 11:22:53.401265   22060 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:22:54.000636   22060 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:22:53.344292946 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0601 11:22:54.000663   22060 machine.go:91] provisioned docker machine in 1.906336553s
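
The unit update just completed is idempotent: the rendered unit is written to docker.service.new, and the file is only swapped in (followed by daemon-reload, enable, restart) when diff exits non-zero, i.e. when the content actually changed. The same pattern, spelled out standalone (rendered.service is a placeholder for the unit text shown earlier in the log):

    # Replace a systemd unit only when the rendered content differs from what is on disk.
    sudo tee /lib/systemd/system/docker.service.new >/dev/null < rendered.service
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
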
	I0601 11:22:54.000671   22060 client.go:171] LocalClient.Create took 4.331519835s
	I0601 11:22:54.000687   22060 start.go:173] duration metric: libmachine.API.Create for "test-preload-20220601112248-16804" took 4.331567332s
	I0601 11:22:54.000696   22060 start.go:306] post-start starting for "test-preload-20220601112248-16804" (driver="docker")
	I0601 11:22:54.000701   22060 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:22:54.000776   22060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:22:54.000822   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:54.071682   22060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65494 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa Username:docker}
	I0601 11:22:54.162927   22060 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:22:54.166273   22060 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:22:54.166289   22060 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:22:54.166296   22060 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:22:54.166303   22060 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:22:54.166311   22060 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:22:54.166419   22060 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:22:54.166561   22060 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:22:54.166703   22060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:22:54.173870   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:22:54.191534   22060 start.go:309] post-start completed in 190.823695ms
	I0601 11:22:54.192056   22060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220601112248-16804
	I0601 11:22:54.260823   22060 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/config.json ...
	I0601 11:22:54.261229   22060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:22:54.261276   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:54.330009   22060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65494 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa Username:docker}
	I0601 11:22:54.414702   22060 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:22:54.419280   22060 start.go:134] duration metric: createHost completed in 4.772500825s
	I0601 11:22:54.419295   22060 start.go:81] releasing machines lock for "test-preload-20220601112248-16804", held for 4.772649134s
	I0601 11:22:54.419380   22060 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220601112248-16804
	I0601 11:22:54.489953   22060 ssh_runner.go:195] Run: systemctl --version
	I0601 11:22:54.489979   22060 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:22:54.490015   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:54.490064   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:54.563214   22060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65494 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa Username:docker}
	I0601 11:22:54.563657   22060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65494 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/test-preload-20220601112248-16804/id_rsa Username:docker}
	I0601 11:22:54.647286   22060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:22:54.784375   22060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:22:54.794718   22060 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:22:54.794770   22060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:22:54.804223   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:22:54.817067   22060 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:22:54.881699   22060 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:22:54.952640   22060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:22:54.962686   22060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:22:55.026167   22060 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:22:55.036233   22060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:22:55.071673   22060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:22:55.152315   22060 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.16 ...
	I0601 11:22:55.152464   22060 cli_runner.go:164] Run: docker exec -t test-preload-20220601112248-16804 dig +short host.docker.internal
	I0601 11:22:55.286240   22060 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:22:55.286453   22060 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:22:55.290728   22060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
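
The one-liner above keeps /etc/hosts idempotent: strip any existing host.minikube.internal mapping, append the freshly discovered host IP, and copy the result back in one step. Unrolled for readability:

    # Idempotent /etc/hosts entry: remove any stale mapping before appending the new one.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.65.2\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
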
	I0601 11:22:55.300270   22060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220601112248-16804
	I0601 11:22:55.371153   22060 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0601 11:22:55.371210   22060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:22:55.400127   22060 docker.go:610] Got preloaded images: 
	I0601 11:22:55.400149   22060 docker.go:616] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0601 11:22:55.400174   22060 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0601 11:22:55.407523   22060 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:22:55.407874   22060 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0601 11:22:55.408447   22060 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0601 11:22:55.409187   22060 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 11:22:55.409531   22060 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 11:22:55.410352   22060 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 11:22:55.411025   22060 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 11:22:55.411535   22060 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 11:22:55.416180   22060 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0601 11:22:55.417059   22060 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0601 11:22:55.417334   22060 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
	I0601 11:22:55.418273   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:55.418874   22060 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
	I0601 11:22:55.419087   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:55.419460   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:55.419783   22060 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
	I0601 11:22:55.864010   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0601 11:22:55.867040   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0601 11:22:55.890099   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 11:22:55.899826   22060 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0601 11:22:55.899858   22060 docker.go:291] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0601 11:22:55.899917   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0601 11:22:55.900275   22060 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0601 11:22:55.900296   22060 docker.go:291] Removing image: k8s.gcr.io/coredns:1.6.5
	I0601 11:22:55.900335   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0601 11:22:55.912904   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 11:22:55.927812   22060 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0601 11:22:55.927838   22060 docker.go:291] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 11:22:55.927924   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0601 11:22:55.941159   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0601 11:22:55.941704   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0601 11:22:55.941842   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0601 11:22:55.941950   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0601 11:22:55.942064   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0601 11:22:55.965295   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 11:22:55.995858   22060 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0601 11:22:55.995893   22060 docker.go:291] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 11:22:55.995959   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0601 11:22:56.010848   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0601 11:22:56.010974   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0601 11:22:56.021055   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0601 11:22:56.021093   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0601 11:22:56.021111   22060 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0601 11:22:56.021132   22060 docker.go:291] Removing image: k8s.gcr.io/pause:3.1
	I0601 11:22:56.021157   22060 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0601 11:22:56.021178   22060 docker.go:291] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 11:22:56.021178   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0601 11:22:56.021183   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0601 11:22:56.021208   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0601 11:22:56.021225   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0601 11:22:56.032071   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0601 11:22:56.034067   22060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:22:56.121058   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0601 11:22:56.121087   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0601 11:22:56.121127   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0601 11:22:56.121289   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0601 11:22:56.191948   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0601 11:22:56.192026   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0601 11:22:56.192160   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0601 11:22:56.192182   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0601 11:22:56.225162   22060 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0601 11:22:56.225194   22060 docker.go:291] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0601 11:22:56.225273   22060 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0601 11:22:56.237839   22060 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0601 11:22:56.237851   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0601 11:22:56.237868   22060 docker.go:291] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:22:56.237885   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0601 11:22:56.237927   22060 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:22:56.246755   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0601 11:22:56.246789   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0601 11:22:56.249613   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0601 11:22:56.249636   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0601 11:22:56.315855   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0601 11:22:56.316014   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0601 11:22:56.337694   22060 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0601 11:22:56.337861   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0601 11:22:56.354601   22060 docker.go:258] Loading image: /var/lib/minikube/images/pause_3.1
	I0601 11:22:56.354615   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0601 11:22:56.381093   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0601 11:22:56.381122   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0601 11:22:56.401983   22060 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0601 11:22:56.402019   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0601 11:22:56.600392   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0601 11:22:57.154461   22060 docker.go:258] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0601 11:22:57.154478   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0601 11:22:57.996596   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0601 11:22:57.996623   22060 docker.go:258] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0601 11:22:57.996637   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0601 11:22:58.594666   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0601 11:22:59.457961   22060 docker.go:258] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0601 11:22:59.457980   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0601 11:23:01.461873   22060 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (2.003813954s)
	I0601 11:23:01.461896   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0601 11:23:01.461929   22060 docker.go:258] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0601 11:23:01.461938   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0601 11:23:02.532410   22060 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load": (1.070426835s)
	I0601 11:23:02.532440   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0601 11:23:02.532488   22060 docker.go:258] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0601 11:23:02.532499   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0601 11:23:03.647778   22060 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load": (1.115231668s)
	I0601 11:23:03.647792   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0601 11:23:03.647842   22060 docker.go:258] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0601 11:23:03.647854   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0601 11:23:04.767431   22060 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.119525356s)
	I0601 11:23:04.767473   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0601 11:23:04.767488   22060 docker.go:258] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0601 11:23:04.767500   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0601 11:23:07.856302   22060 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.088695623s)
	I0601 11:23:07.856322   22060 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0601 11:23:07.856362   22060 cache_images.go:123] Successfully loaded all cached images
	I0601 11:23:07.856365   22060 cache_images.go:92] LoadImages completed in 12.455789844s
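
Every image in the list went through the same three-step fallback visible above: stat the tarball under /var/lib/minikube/images on the node, scp it from the host cache when the stat fails, then stream it into docker load. Condensed to one image, with the long host cache path abbreviated to ~/.minikube:

    # Per-image cache fallback, one image shown; the run above repeats this for all eight.
    IMG=etcd_3.4.3-0
    NODE="docker@127.0.0.1"; PORT=65494
    KEY=~/.minikube/machines/test-preload-20220601112248-16804/id_rsa
    if ! ssh -i "$KEY" -p "$PORT" "$NODE" stat "/var/lib/minikube/images/$IMG" >/dev/null 2>&1; then
        scp -i "$KEY" -P "$PORT" ~/.minikube/cache/images/amd64/k8s.gcr.io/"$IMG" \
            "$NODE:/var/lib/minikube/images/$IMG"
    fi
    ssh -i "$KEY" -p "$PORT" "$NODE" "sudo cat /var/lib/minikube/images/$IMG | docker load"
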
	I0601 11:23:07.856500   22060 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
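
The CgroupDriver probed here (systemd on this node) feeds straight into the kubelet configuration rendered below; the two must agree or the kubelet will fail to manage containers. The probe itself is a one-liner:

    # Kubelet's cgroupDriver (see the KubeletConfiguration below) must match this value.
    docker info --format '{{.CgroupDriver}}'
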
	I0601 11:23:07.931600   22060 cni.go:95] Creating CNI manager for ""
	I0601 11:23:07.931612   22060 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:23:07.931621   22060 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:23:07.931632   22060 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220601112248-16804 NodeName:test-preload-20220601112248-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:23:07.931724   22060 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220601112248-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
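
The rendered config above is later written to /var/tmp/minikube/kubeadm.yaml.new on the node. Assuming a kubeadm v1.17.0 binary is available (it is installed under /var/lib/minikube/binaries a few lines below), the config can be sanity-checked without mutating the node via a dry run:

    # Dry-run the generated config; kubeadm writes manifests only to a temp directory.
    sudo /var/lib/minikube/binaries/v1.17.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
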
	
	I0601 11:23:07.931783   22060 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220601112248-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601112248-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
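
The fragment above is the systemd drop-in for the kubelet (note the empty ExecStart= reset, the same trick commented at length in the docker unit earlier). It lands on the node a few lines below as 10-kubeadm.conf; installing such a drop-in by hand follows the usual pattern (rendered-dropin.conf is a placeholder for the text above):

    # Install the kubelet drop-in and reload systemd so the override takes effect.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null < rendered-dropin.conf
    sudo systemctl daemon-reload
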
	I0601 11:23:07.931840   22060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0601 11:23:07.939681   22060 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0601 11:23:07.939720   22060 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0601 11:23:07.947404   22060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0601 11:23:07.947402   22060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0601 11:23:07.947413   22060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubeadm
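
Each download URL above carries a checksum=file:<url>.sha256 query, telling minikube's downloader to fetch the published digest alongside the binary and verify it. Done by hand (the .sha256 files for these releases contain just the bare hex digest):

    # Fetch kubelet for linux/amd64 and verify it against the published SHA-256.
    BASE=https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64
    curl -fLO "$BASE/kubelet"
    echo "$(curl -fsSL "$BASE/kubelet.sha256")  kubelet" | sha256sum -c -
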
	I0601 11:23:08.547815   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0601 11:23:08.553029   22060 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0601 11:23:08.553066   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0601 11:23:08.667491   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0601 11:23:08.826761   22060 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0601 11:23:08.839627   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0601 11:23:09.003841   22060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:23:09.089926   22060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0601 11:23:09.162339   22060 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0601 11:23:09.162372   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0601 11:23:12.011936   22060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:23:12.020968   22060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0601 11:23:12.035038   22060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:23:12.048074   22060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0601 11:23:12.061994   22060 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:23:12.066126   22060 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:23:12.077249   22060 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804 for IP: 192.168.49.2
	I0601 11:23:12.077352   22060 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:23:12.077396   22060 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:23:12.077436   22060 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/client.key
	I0601 11:23:12.077446   22060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/client.crt with IP's: []
	I0601 11:23:12.203084   22060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/client.crt ...
	I0601 11:23:12.203095   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/client.crt: {Name:mke28936867208351fcf85de143ef780f87fe854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:12.203439   22060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/client.key ...
	I0601 11:23:12.203448   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/client.key: {Name:mk9443ffe5dcbfad8f9e5982c1d598bad5b87f7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:12.203657   22060 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.key.dd3b5fb2
	I0601 11:23:12.203674   22060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:23:12.406804   22060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.crt.dd3b5fb2 ...
	I0601 11:23:12.406814   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.crt.dd3b5fb2: {Name:mka721f2ea72e937b2b372cdd449f3ec101822cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:12.407049   22060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.key.dd3b5fb2 ...
	I0601 11:23:12.407057   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.key.dd3b5fb2: {Name:mk0e224c01f1faeae4951f6e37ef79b8197784c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:12.407248   22060 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.crt
	I0601 11:23:12.407412   22060 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.key
	I0601 11:23:12.407572   22060 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.key
	I0601 11:23:12.407587   22060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.crt with IP's: []
	I0601 11:23:12.560683   22060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.crt ...
	I0601 11:23:12.560692   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.crt: {Name:mk12b22fd51f2d2c24ed06b2f2bbdc5c24817925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:12.560920   22060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.key ...
	I0601 11:23:12.560928   22060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.key: {Name:mkcec35756724913be6c973eb47cbc60de27fb06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:23:12.561289   22060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:23:12.561329   22060 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:23:12.561338   22060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:23:12.561371   22060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:23:12.561402   22060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:23:12.561430   22060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:23:12.561498   22060 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:23:12.561984   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:23:12.580464   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:23:12.597993   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:23:12.616042   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/test-preload-20220601112248-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:23:12.633715   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:23:12.650739   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:23:12.668614   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:23:12.685774   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:23:12.703364   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:23:12.720850   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:23:12.738486   22060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:23:12.755353   22060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:23:12.768105   22060 ssh_runner.go:195] Run: openssl version
	I0601 11:23:12.773540   22060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:23:12.781319   22060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:23:12.785146   22060 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:23:12.785190   22060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:23:12.790482   22060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:23:12.797981   22060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:23:12.806052   22060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:23:12.810495   22060 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:23:12.810535   22060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:23:12.815836   22060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:23:12.823853   22060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:23:12.831918   22060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:23:12.836615   22060 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:23:12.836659   22060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:23:12.842212   22060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:23:12.850329   22060 kubeadm.go:395] StartCluster: {Name:test-preload-20220601112248-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220601112248-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:23:12.850421   22060 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:23:12.879630   22060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:23:12.887612   22060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:23:12.895453   22060 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:23:12.895494   22060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:23:12.903193   22060 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:23:12.903217   22060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:23:13.670672   22060 out.go:204]   - Generating certificates and keys ...
	I0601 11:23:16.462225   22060 out.go:204]   - Booting up control plane ...
	W0601 11:25:11.407913   22060 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220601112248-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220601112248-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 18:23:12.954179    1444 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 18:23:12.954259    1444 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:23:16.476834    1444 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:23:16.477580    1444 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 11:25:11.407962   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:25:11.828360   22060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:25:11.838100   22060 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:25:11.838147   22060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:25:11.846236   22060 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:25:11.846253   22060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:25:12.559014   22060 out.go:204]   - Generating certificates and keys ...
	I0601 11:25:12.987436   22060 out.go:204]   - Booting up control plane ...
	I0601 11:27:07.910805   22060 kubeadm.go:397] StartCluster complete in 3m55.05588723s
	I0601 11:27:07.910887   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:27:07.938682   22060 logs.go:274] 0 containers: []
	W0601 11:27:07.938694   22060 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:27:07.938750   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:27:07.967832   22060 logs.go:274] 0 containers: []
	W0601 11:27:07.967845   22060 logs.go:276] No container was found matching "etcd"
	I0601 11:27:07.967902   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:27:07.997384   22060 logs.go:274] 0 containers: []
	W0601 11:27:07.997397   22060 logs.go:276] No container was found matching "coredns"
	I0601 11:27:07.997452   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:27:08.025990   22060 logs.go:274] 0 containers: []
	W0601 11:27:08.026003   22060 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:27:08.026056   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:27:08.055292   22060 logs.go:274] 0 containers: []
	W0601 11:27:08.055306   22060 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:27:08.055361   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:27:08.085299   22060 logs.go:274] 0 containers: []
	W0601 11:27:08.085311   22060 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:27:08.085369   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:27:08.115127   22060 logs.go:274] 0 containers: []
	W0601 11:27:08.115140   22060 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:27:08.115195   22060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:27:08.145694   22060 logs.go:274] 0 containers: []
	W0601 11:27:08.145706   22060 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:27:08.145713   22060 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:27:08.145719   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:27:08.198595   22060 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:27:08.198607   22060 logs.go:123] Gathering logs for Docker ...
	I0601 11:27:08.198613   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:27:08.213124   22060 logs.go:123] Gathering logs for container status ...
	I0601 11:27:08.213136   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:27:10.268818   22060 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055612231s)
	I0601 11:27:10.268929   22060 logs.go:123] Gathering logs for kubelet ...
	I0601 11:27:10.268936   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:27:10.307666   22060 logs.go:123] Gathering logs for dmesg ...
	I0601 11:27:10.307680   22060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0601 11:27:10.321155   22060 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 18:25:11.896846    3740 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 18:25:11.896896    3740 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:25:12.979265    3740 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:25:12.980887    3740 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 11:27:10.321177   22060 out.go:239] * 
	W0601 11:27:10.321328   22060 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 18:25:11.896846    3740 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 18:25:11.896896    3740 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:25:12.979265    3740 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:25:12.980887    3740 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:27:10.321344   22060 out.go:239] * 
	W0601 11:27:10.321872   22060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:27:10.407809   22060 out.go:177] 
	W0601 11:27:10.451625   22060 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0601 18:25:11.896846    3740 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0601 18:25:11.896896    3740 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0601 18:25:12.979265    3740 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0601 18:25:12.980887    3740 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:27:10.451750   22060 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 11:27:10.451813   22060 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 11:27:10.473640   22060 out.go:177] 

** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220601112248-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-06-01 11:27:10.595128 -0700 PDT m=+1792.547050919
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220601112248-16804
helpers_test.go:235: (dbg) docker inspect test-preload-20220601112248-16804:

-- stdout --
	[
	    {
	        "Id": "c32a2c81c23aae020b16dffde9edad3fcaf185b79231a0db066369455f3f6a86",
	        "Created": "2022-06-01T18:22:50.812735736Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91445,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:22:51.138502203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/c32a2c81c23aae020b16dffde9edad3fcaf185b79231a0db066369455f3f6a86/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c32a2c81c23aae020b16dffde9edad3fcaf185b79231a0db066369455f3f6a86/hostname",
	        "HostsPath": "/var/lib/docker/containers/c32a2c81c23aae020b16dffde9edad3fcaf185b79231a0db066369455f3f6a86/hosts",
	        "LogPath": "/var/lib/docker/containers/c32a2c81c23aae020b16dffde9edad3fcaf185b79231a0db066369455f3f6a86/c32a2c81c23aae020b16dffde9edad3fcaf185b79231a0db066369455f3f6a86-json.log",
	        "Name": "/test-preload-20220601112248-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220601112248-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220601112248-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0bca1ce951ca31d2f47aba191eb24ed0326c2190aea079488c1d3fcb0072c06c-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bca1ce951ca31d2f47aba191eb24ed0326c2190aea079488c1d3fcb0072c06c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bca1ce951ca31d2f47aba191eb24ed0326c2190aea079488c1d3fcb0072c06c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bca1ce951ca31d2f47aba191eb24ed0326c2190aea079488c1d3fcb0072c06c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220601112248-16804",
	                "Source": "/var/lib/docker/volumes/test-preload-20220601112248-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220601112248-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220601112248-16804",
	                "name.minikube.sigs.k8s.io": "test-preload-20220601112248-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df512eadf0fdc5f2c42d2a7dc63ac055231f8db9ae3bbb38254518c2cc0ffa59",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65494"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65495"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65496"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65497"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "65498"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df512eadf0fd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220601112248-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c32a2c81c23a",
	                        "test-preload-20220601112248-16804"
	                    ],
	                    "NetworkID": "06e2165822ada096d07e92956c9c44d1e806bbe814ee6572feac9d9346b06160",
	                    "EndpointID": "fc5c48ca61f75a61e32ea861bb459e2940b8a95c36f9396c796a99a78a3d23dc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220601112248-16804 -n test-preload-20220601112248-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220601112248-16804 -n test-preload-20220601112248-16804: exit status 6 (439.259918ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:27:11.102730   22217 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220601112248-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220601112248-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220601112248-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220601112248-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220601112248-16804: (2.564257139s)
--- FAIL: TestPreload (264.92s)
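The failure pattern above (the kubelet healthz probe on 127.0.0.1:10248 refusing connections until kubeadm's wait-control-plane phase times out) is what minikube's own suggestion attributes to a cgroup-driver mismatch; the log itself does not confirm the root cause. A minimal manual-retry sketch, reusing only the profile name and flags already shown in this log (the extra-config flag is minikube's suggestion, not a verified fix for this run):

	out/minikube-darwin-amd64 delete -p test-preload-20220601112248-16804
	out/minikube-darwin-amd64 start -p test-preload-20220601112248-16804 --driver=docker \
	    --kubernetes-version=v1.17.0 --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=1
	out/minikube-darwin-amd64 ssh -p test-preload-20220601112248-16804 "sudo journalctl -xeu kubelet"

The last command inspects the kubelet journal inside the node container, which is where the `[kubelet-check]` failures above would be explained.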

TestRunningBinaryUpgrade (47.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1192444221.exe start -p running-upgrade-20220601113155-16804 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1192444221.exe start -p running-upgrade-20220601113155-16804 --memory=2200 --vm-driver=docker : exit status 70 (32.177780574s)

-- stdout --
	! [running-upgrade-20220601113155-16804] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3917639906
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:32:09.530410980 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220601113155-16804" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:32:25.774410938 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220601113155-16804", then "minikube start -p running-upgrade-20220601113155-16804 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.25.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:32:25.774410938 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1192444221.exe start -p running-upgrade-20220601113155-16804 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1192444221.exe start -p running-upgrade-20220601113155-16804 --memory=2200 --vm-driver=docker : exit status 70 (4.468375998s)

-- stdout --
	* [running-upgrade-20220601113155-16804] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig543202111
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220601113155-16804" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1192444221.exe start -p running-upgrade-20220601113155-16804 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.1192444221.exe start -p running-upgrade-20220601113155-16804 --memory=2200 --vm-driver=docker : exit status 70 (4.490725708s)

-- stdout --
	* [running-upgrade-20220601113155-16804] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1087194237
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220601113155-16804" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-06-01 11:32:39.734053 -0700 PDT m=+2121.676735002
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220601113155-16804
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220601113155-16804:

-- stdout --
	[
	    {
	        "Id": "dacd000b69161299717a9401a834f8a600e0c9623c84f88c84b83c4177692c0f",
	        "Created": "2022-06-01T18:32:17.743488117Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 122431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:32:18.00966207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/dacd000b69161299717a9401a834f8a600e0c9623c84f88c84b83c4177692c0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dacd000b69161299717a9401a834f8a600e0c9623c84f88c84b83c4177692c0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/dacd000b69161299717a9401a834f8a600e0c9623c84f88c84b83c4177692c0f/hosts",
	        "LogPath": "/var/lib/docker/containers/dacd000b69161299717a9401a834f8a600e0c9623c84f88c84b83c4177692c0f/dacd000b69161299717a9401a834f8a600e0c9623c84f88c84b83c4177692c0f-json.log",
	        "Name": "/running-upgrade-20220601113155-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220601113155-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/754bd19d9a688a0c9fc66b3495d84baa02059269cd8bbe6fced647be7f6b4282-init/diff:/var/lib/docker/overlay2/5a5021b04d40486c3f899d3d86469c69d0a0a3a6aedb4a262808e8e0e3212dd9/diff:/var/lib/docker/overlay2/34d2fad93be8a8b08db19932b165d6e4ee12c642f5b9a71ae0da16e41e895455/diff:/var/lib/docker/overlay2/a519d8b71fe163aad87235d12fd7596db7d55f7f2c546ea938ac5b44f16b652f/diff:/var/lib/docker/overlay2/2f15e48f7fd9f51c0246edf680b5bf5101d756e18f610fe615defe179c7ff534/diff:/var/lib/docker/overlay2/b3950a464734420ac98826fd7846d239d550db1d1ae773f32fd285af845cdf22/diff:/var/lib/docker/overlay2/8988ddfdbc34033c8f6dfbda80a939b635699c7799196fc6e1c67870aa3a98fe/diff:/var/lib/docker/overlay2/7ba0245eca92a262dcf5985ae53e44b4246b2148cf3041b19299c4824436c857/diff:/var/lib/docker/overlay2/6c8ceadb783c54050c9822b7a9c7e32f5c8c95922ec59c1027de2484daecd2b4/diff:/var/lib/docker/overlay2/35b8de062c6e2440d11c06c0221db2bc4763da7dcc75f1ff234a1a6620f908c0/diff:/var/lib/docker/overlay2/3584c2
bd1bdbc4f33ae8409b002bb9449ef69f5eac5efaf3029bafd8e59e616d/diff:/var/lib/docker/overlay2/89f35c1cfd5f4b4711c8faf3c75a939b4b42ad8280d52e46ed9174898ebd4dea/diff:/var/lib/docker/overlay2/ba52e45aa55684244ce68ffb6f37275e672a920729ea5be00e4cc02625a11336/diff:/var/lib/docker/overlay2/88f06922766e6932db8f1d9662f093b42c354676160da5d7d627df01138940d2/diff:/var/lib/docker/overlay2/e30f8690cf13147aeb6cc0f6af6a5cc429942a49d65fc69df4976e32002b2c9c/diff:/var/lib/docker/overlay2/a013d03dab2547e58c77f48109fc20ac70497dba6843d25ae3705c054244401e/diff:/var/lib/docker/overlay2/cdb70bf8140c088f0dea40152c2a2ce37a40912c2a58e90e93f143d49795084f/diff:/var/lib/docker/overlay2/65b836a39622281946b823eb252606e8e09382a0f51a3fd2000a31247d55db47/diff:/var/lib/docker/overlay2/ba32c157bb001a6bdee2dd25782f9072b8f2c1f17dd60711c5dc96767ca3633e/diff:/var/lib/docker/overlay2/ebafcf8827f052a7339d84dae13db8562e7c9ff8c83ab195475000d74a29cb36/diff:/var/lib/docker/overlay2/be3502d132a8b884468dd4a5bcd811e32bd090fb7b255d888e53c9d4014ba2e0/diff:/var/lib/d
ocker/overlay2/f3b71613f15fd8e9cf665f9751d01943a85c6e1f36bc8a4317db3788ca9a6d68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/754bd19d9a688a0c9fc66b3495d84baa02059269cd8bbe6fced647be7f6b4282/merged",
	                "UpperDir": "/var/lib/docker/overlay2/754bd19d9a688a0c9fc66b3495d84baa02059269cd8bbe6fced647be7f6b4282/diff",
	                "WorkDir": "/var/lib/docker/overlay2/754bd19d9a688a0c9fc66b3495d84baa02059269cd8bbe6fced647be7f6b4282/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220601113155-16804",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220601113155-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220601113155-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220601113155-16804",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220601113155-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1020025c13898f133985f019d1910160259c7159fe463bcfda034ffa49d33034",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51969"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51970"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1020025c1389",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "478a0d534f800c64ce941f2a7e53b61a9460216a30a6a8c1b5b24e9a990f404c",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f7b7bc4a3013ef683cda90da688bca751e398b5f704691ef347943a02e924737",
	                    "EndpointID": "478a0d534f800c64ce941f2a7e53b61a9460216a30a6a8c1b5b24e9a990f404c",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220601113155-16804 -n running-upgrade-20220601113155-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220601113155-16804 -n running-upgrade-20220601113155-16804: exit status 6 (413.831913ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:32:40.206761   24018 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220601113155-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220601113155-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220601113155-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220601113155-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220601113155-16804: (2.405124597s)
--- FAIL: TestRunningBinaryUpgrade (47.17s)
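For readers unfamiliar with the unit rewrite shown three times above: systemd rejects a second `ExecStart=` for non-oneshot services, so an override must first clear the inherited command with an empty `ExecStart=` before assigning a new one, exactly as the comments in the generated file describe. A hand-rolled sketch of the same pattern (the drop-in path and the trimmed dockerd flags here are illustrative placeholders, not what minikube actually writes):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker

Note that the unit generated by the legacy v1.9.0 provisioner drops `$MAINPID` from `ExecReload=/bin/kill -s HUP `, visible in each diff above; the captured output does not show which directive ultimately made `docker.service` fail to start.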

TestKubernetesUpgrade (301.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0601 11:34:07.937883   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:34:38.254443   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.260855   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.273145   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.295287   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.335669   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.417048   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.578197   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:38.900385   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:39.542715   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:40.824161   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:43.385791   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:48.506793   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:34:58.749345   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m12.488211197s)

-- stdout --
	* [kubernetes-upgrade-20220601113329-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220601113329-16804 in cluster kubernetes-upgrade-20220601113329-16804
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:33:29.108786   24372 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:33:29.108981   24372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:33:29.108987   24372 out.go:309] Setting ErrFile to fd 2...
	I0601 11:33:29.108990   24372 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:33:29.109084   24372 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:33:29.109401   24372 out.go:303] Setting JSON to false
	I0601 11:33:29.124232   24372 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":7379,"bootTime":1654101030,"procs":355,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:33:29.124352   24372 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:33:29.146451   24372 out.go:177] * [kubernetes-upgrade-20220601113329-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:33:29.189364   24372 notify.go:193] Checking for updates...
	I0601 11:33:29.211303   24372 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:33:29.233013   24372 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:33:29.254376   24372 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:33:29.276406   24372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:33:29.298099   24372 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:33:29.324967   24372 config.go:178] Loaded profile config "cert-expiration-20220601113122-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:33:29.325060   24372 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:33:29.396122   24372 docker.go:137] docker version: linux-20.10.14
	I0601 11:33:29.396241   24372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:33:29.522327   24372 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:33:29.472482219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:33:29.544179   24372 out.go:177] * Using the docker driver based on user configuration
	I0601 11:33:29.564991   24372 start.go:284] selected driver: docker
	I0601 11:33:29.565014   24372 start.go:806] validating driver "docker" against <nil>
	I0601 11:33:29.565036   24372 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:33:29.568432   24372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:33:29.694076   24372 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:33:29.645631424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:33:29.694272   24372 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:33:29.694420   24372 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 11:33:29.716264   24372 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:33:29.738093   24372 cni.go:95] Creating CNI manager for ""
	I0601 11:33:29.738150   24372 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:33:29.738163   24372 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220601113329-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601113329-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:33:29.759855   24372 out.go:177] * Starting control plane node kubernetes-upgrade-20220601113329-16804 in cluster kubernetes-upgrade-20220601113329-16804
	I0601 11:33:29.781132   24372 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:33:29.802853   24372 out.go:177] * Pulling base image ...
	I0601 11:33:29.845973   24372 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:33:29.846072   24372 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:33:29.910004   24372 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:33:29.910026   24372 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:33:29.918061   24372 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:33:29.918081   24372 cache.go:57] Caching tarball of preloaded images
	I0601 11:33:29.918402   24372 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:33:29.962056   24372 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0601 11:33:29.984124   24372 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 11:33:30.084537   24372 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:33:32.648470   24372 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 11:33:32.648643   24372 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0601 11:33:33.184888   24372 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
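
For reference, the preload download above carries its expected digest in the URL query (?checksum=md5:...). Verifying a cached tarball by hand would look roughly like this (a sketch using the URL and md5 from the download line above; md5 -q is the macOS digest tool):

    URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4'
    curl -sSLo preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 "$URL"
    md5 -q preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
    # expect: 326f3ce331abb64565b50b8c9e791244 (from the ?checksum=md5:... parameter above)
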
	I0601 11:33:33.185019   24372 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/config.json ...
	I0601 11:33:33.185071   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/config.json: {Name:mk69586811be23b01923e655e5b0d50c2535fa45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:33.185495   24372 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:33:33.185565   24372 start.go:352] acquiring machines lock for kubernetes-upgrade-20220601113329-16804: {Name:mk0b48f54b7128e5ca7288952f3a0511e7c50fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:33:33.185687   24372 start.go:356] acquired machines lock for "kubernetes-upgrade-20220601113329-16804" in 114.085µs
	I0601 11:33:33.185724   24372 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220601113329-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601113329-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:33:33.185817   24372 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:33:33.206887   24372 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:33:33.207212   24372 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220601113329-16804" (driver="docker")
	I0601 11:33:33.207266   24372 client.go:168] LocalClient.Create starting
	I0601 11:33:33.207405   24372 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:33:33.207469   24372 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:33.207491   24372 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:33.207624   24372 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:33:33.207673   24372 main.go:134] libmachine: Decoding PEM data...
	I0601 11:33:33.207691   24372 main.go:134] libmachine: Parsing certificate...
	I0601 11:33:33.208479   24372 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601113329-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:33:33.277523   24372 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601113329-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:33:33.277622   24372 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220601113329-16804] to gather additional debugging logs...
	I0601 11:33:33.277641   24372 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220601113329-16804
	W0601 11:33:33.340405   24372 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220601113329-16804 returned with exit code 1
	I0601 11:33:33.340429   24372 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220601113329-16804]: docker network inspect kubernetes-upgrade-20220601113329-16804: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220601113329-16804
	I0601 11:33:33.340452   24372 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220601113329-16804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220601113329-16804
	
	** /stderr **
	I0601 11:33:33.340528   24372 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:33:33.402490   24372 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058ca20] misses:0}
	I0601 11:33:33.402526   24372 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:33:33.402542   24372 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220601113329-16804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:33:33.402604   24372 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220601113329-16804
	I0601 11:33:33.499103   24372 network_create.go:99] docker network kubernetes-upgrade-20220601113329-16804 192.168.49.0/24 created
	I0601 11:33:33.499139   24372 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20220601113329-16804" container
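
The network create above reserves 192.168.49.0/24 with gateway 192.168.49.1 and pins 192.168.49.2 for the node container. What was actually created can be re-checked with a docker inspect template (sketch; network name taken from this run):

    docker network inspect kubernetes-upgrade-20220601113329-16804 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expect: 192.168.49.0/24 192.168.49.1
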
	I0601 11:33:33.499218   24372 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:33:33.562071   24372 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220601113329-16804 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601113329-16804 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:33:33.624512   24372 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220601113329-16804
	I0601 11:33:33.624615   24372 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220601113329-16804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601113329-16804 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220601113329-16804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:33:34.077948   24372 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220601113329-16804
	I0601 11:33:34.078003   24372 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:33:34.078017   24372 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:33:34.078238   24372 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220601113329-16804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:33:38.369502   24372 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220601113329-16804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291069769s)
	I0601 11:33:38.369526   24372 kic.go:188] duration metric: took 4.291388 seconds to extract preloaded images to volume
	I0601 11:33:38.369657   24372 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:33:38.519851   24372 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220601113329-16804 --name kubernetes-upgrade-20220601113329-16804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220601113329-16804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220601113329-16804 --network kubernetes-upgrade-20220601113329-16804 --ip 192.168.49.2 --volume kubernetes-upgrade-20220601113329-16804:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:33:38.886064   24372 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Running}}
	I0601 11:33:38.956332   24372 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Status}}
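
The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 to ephemeral 127.0.0.1 ports, which is why the provisioning steps below dial 127.0.0.1:52928 for SSH. The mapping can be recovered at any time (sketch; container name from this run):

    docker port kubernetes-upgrade-20220601113329-16804 22/tcp
    # -> 127.0.0.1:52928, the SSH endpoint used in the steps below
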
	I0601 11:33:39.036488   24372 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220601113329-16804 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:33:39.191943   24372 oci.go:247] the created container "kubernetes-upgrade-20220601113329-16804" has a running status.
	I0601 11:33:39.191975   24372 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa...
	I0601 11:33:39.310707   24372 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:33:39.428082   24372 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Status}}
	I0601 11:33:39.494421   24372 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:33:39.494440   24372 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220601113329-16804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:33:39.618287   24372 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Status}}
	I0601 11:33:39.685256   24372 machine.go:88] provisioning docker machine ...
	I0601 11:33:39.685297   24372 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220601113329-16804"
	I0601 11:33:39.685405   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:39.752300   24372 main.go:134] libmachine: Using SSH client type: native
	I0601 11:33:39.752579   24372 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52928 <nil> <nil>}
	I0601 11:33:39.752603   24372 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220601113329-16804 && echo "kubernetes-upgrade-20220601113329-16804" | sudo tee /etc/hostname
	I0601 11:33:39.876576   24372 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220601113329-16804
	
	I0601 11:33:39.876665   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:39.943361   24372 main.go:134] libmachine: Using SSH client type: native
	I0601 11:33:39.943540   24372 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52928 <nil> <nil>}
	I0601 11:33:39.943556   24372 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220601113329-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220601113329-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220601113329-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:33:40.062623   24372 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:33:40.062644   24372 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:33:40.062671   24372 ubuntu.go:177] setting up certificates
	I0601 11:33:40.062679   24372 provision.go:83] configureAuth start
	I0601 11:33:40.062737   24372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:40.129104   24372 provision.go:138] copyHostCerts
	I0601 11:33:40.129295   24372 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:33:40.129303   24372 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:33:40.129403   24372 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:33:40.129580   24372 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:33:40.129593   24372 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:33:40.129656   24372 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:33:40.129836   24372 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:33:40.129842   24372 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:33:40.129901   24372 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:33:40.130025   24372 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220601113329-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220601113329-16804]
	I0601 11:33:40.214545   24372 provision.go:172] copyRemoteCerts
	I0601 11:33:40.214592   24372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:33:40.214635   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:40.282109   24372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52928 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:33:40.368212   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0601 11:33:40.384899   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:33:40.401696   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:33:40.418068   24372 provision.go:86] duration metric: configureAuth took 355.367396ms
	I0601 11:33:40.418080   24372 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:33:40.418215   24372 config.go:178] Loaded profile config "kubernetes-upgrade-20220601113329-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:33:40.418271   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:40.484832   24372 main.go:134] libmachine: Using SSH client type: native
	I0601 11:33:40.484982   24372 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52928 <nil> <nil>}
	I0601 11:33:40.485014   24372 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:33:40.602347   24372 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:33:40.602362   24372 ubuntu.go:71] root file system type: overlay
	I0601 11:33:40.602509   24372 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:33:40.602598   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:40.669168   24372 main.go:134] libmachine: Using SSH client type: native
	I0601 11:33:40.669338   24372 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52928 <nil> <nil>}
	I0601 11:33:40.669386   24372 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:33:40.800538   24372 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:33:40.800611   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:40.868006   24372 main.go:134] libmachine: Using SSH client type: native
	I0601 11:33:40.868163   24372 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 52928 <nil> <nil>}
	I0601 11:33:40.868177   24372 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:33:41.518956   24372 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:33:40.804804977 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
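
The long diff above is the byproduct of the idempotent install idiom run at 11:33:40.868: the freshly rendered docker.service is only moved into place, and docker only daemon-reloaded and restarted, when it differs from the unit already on disk. Confirming what systemd ended up loading is a one-liner on the node (this run does the same via sudo systemctl cat docker.service further down):

    systemctl cat docker.service   # prints the unit file systemd actually loaded
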
	I0601 11:33:41.518980   24372 machine.go:91] provisioned docker machine in 1.833653286s
	I0601 11:33:41.518987   24372 client.go:171] LocalClient.Create took 8.311482031s
	I0601 11:33:41.519001   24372 start.go:173] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220601113329-16804" took 8.311557923s
	I0601 11:33:41.519010   24372 start.go:306] post-start starting for "kubernetes-upgrade-20220601113329-16804" (driver="docker")
	I0601 11:33:41.519014   24372 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:33:41.519109   24372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:33:41.519164   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:41.586116   24372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52928 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:33:41.675423   24372 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:33:41.678954   24372 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:33:41.678970   24372 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:33:41.678983   24372 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:33:41.678988   24372 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:33:41.678995   24372 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:33:41.679093   24372 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:33:41.679259   24372 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:33:41.679416   24372 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:33:41.686099   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:33:41.703420   24372 start.go:309] post-start completed in 184.396333ms
	I0601 11:33:41.703971   24372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:41.775230   24372 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/config.json ...
	I0601 11:33:41.775633   24372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:33:41.775681   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:41.848542   24372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52928 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:33:41.936958   24372 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:33:41.941246   24372 start.go:134] duration metric: createHost completed in 8.755157873s
	I0601 11:33:41.941261   24372 start.go:81] releasing machines lock for "kubernetes-upgrade-20220601113329-16804", held for 8.755320767s
	I0601 11:33:41.941322   24372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:42.008443   24372 ssh_runner.go:195] Run: systemctl --version
	I0601 11:33:42.008505   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:42.008454   24372 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:33:42.011687   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:42.082415   24372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52928 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:33:42.085245   24372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52928 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:33:42.165422   24372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:33:42.298226   24372 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:33:42.307612   24372 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:33:42.307676   24372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:33:42.316354   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:33:42.328659   24372 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:33:42.401733   24372 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:33:42.469322   24372 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:33:42.478865   24372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:33:42.544471   24372 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:33:42.553910   24372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:33:42.587816   24372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:33:42.664590   24372 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 11:33:42.664771   24372 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220601113329-16804 dig +short host.docker.internal
	I0601 11:33:42.796462   24372 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:33:42.796556   24372 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:33:42.801157   24372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
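
The hosts update above uses a write-to-temp-then-copy pattern so /etc/hosts is replaced in a single cp rather than edited in place. Generalized (sketch; IP and hostname taken from the line above):

    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.65.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
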
	I0601 11:33:42.810538   24372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:33:42.879900   24372 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:33:42.879977   24372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:33:42.909908   24372 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:33:42.909927   24372 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:33:42.909989   24372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:33:42.939519   24372 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:33:42.939538   24372 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:33:42.939608   24372 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
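
The CgroupDriver query above is what feeds CgroupDriver:systemd into the kubeadm options below: minikube matches the kubelet's cgroup driver to whatever the Docker daemon inside the node reports, since a mismatch prevents the kubelet from starting. The same check by hand against the inner daemon (sketch; container name from this run):

    docker exec kubernetes-upgrade-20220601113329-16804 docker info --format '{{.CgroupDriver}}'
    # -> systemd, matching cgroupDriver in the KubeletConfiguration below
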
	I0601 11:33:43.012557   24372 cni.go:95] Creating CNI manager for ""
	I0601 11:33:43.012570   24372 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:33:43.012579   24372 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:33:43.012609   24372 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220601113329-16804 NodeName:kubernetes-upgrade-20220601113329-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:33:43.012708   24372 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220601113329-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220601113329-16804
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
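Note: the config dumped above is a single file containing four API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A quick way to confirm which documents a generated file contains, assuming it was uploaded to /var/tmp/minikube/kubeadm.yaml as the log shows below:

    # List the apiVersion/kind headers of each YAML document in the file.
    grep -nE '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml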
	
	I0601 11:33:43.012787   24372 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220601113329-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601113329-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
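Note: the kubelet drop-in above relies on a systemd convention: the first, empty ExecStart= clears the ExecStart inherited from the packaged kubelet.service, and the second ExecStart= replaces it with minikube's flags. A sketch of how to verify the merged result on the node, assuming systemd is PID 1 inside the minikube container:

    # Reload units after the drop-in is written, then show the effective
    # unit with all drop-ins applied.
    sudo systemctl daemon-reload
    systemctl cat kubelet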
	I0601 11:33:43.012841   24372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 11:33:43.020528   24372 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:33:43.020586   24372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:33:43.027844   24372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0601 11:33:43.040547   24372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:33:43.052776   24372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0601 11:33:43.065071   24372 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:33:43.068839   24372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
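Note: the one-liner above is an idempotent hosts-file update: grep -v strips any stale control-plane.minikube.internal entry, the echo appends the current one, and the result is written through a temp file with sudo cp rather than mv, since /etc/hosts is bind-mounted inside the container and cannot be replaced by rename. The same pattern generalized (IP and NAME are placeholders):

    # Replace-or-append a tab-separated hosts entry without duplicating it.
    IP=192.168.49.2
    NAME=control-plane.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$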
	I0601 11:33:43.078370   24372 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804 for IP: 192.168.49.2
	I0601 11:33:43.078488   24372 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:33:43.078534   24372 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:33:43.078574   24372 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key
	I0601 11:33:43.078585   24372 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.crt with IP's: []
	I0601 11:33:43.195218   24372 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.crt ...
	I0601 11:33:43.195228   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.crt: {Name:mkecf5b49c570636940a4c7b6184c56b3e2754e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:43.195570   24372 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key ...
	I0601 11:33:43.195578   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key: {Name:mke91bebe96bd793700fb0c9b8c25f8e144119c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:43.195780   24372 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key.dd3b5fb2
	I0601 11:33:43.195799   24372 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:33:43.278481   24372 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt.dd3b5fb2 ...
	I0601 11:33:43.278488   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt.dd3b5fb2: {Name:mk88eca4f4efffb39da509508ebb07ee1265e0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:43.278719   24372 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key.dd3b5fb2 ...
	I0601 11:33:43.278726   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key.dd3b5fb2: {Name:mk6a3a4e82be705efeb79f604410b16d87d263be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:43.278929   24372 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt
	I0601 11:33:43.279118   24372 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key
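Note: the apiserver serving cert generated above is signed for the node IP plus the in-cluster service and loopback IPs (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). The SANs of the written certificate can be confirmed with openssl; the path here is shortened to a default ~/.minikube layout rather than the Jenkins workspace path in the log:

    # Show the Subject Alternative Name extension of the generated cert.
    openssl x509 -noout -text \
      -in ~/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt \
      | grep -A1 'Subject Alternative Name'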
	I0601 11:33:43.279305   24372 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.key
	I0601 11:33:43.279318   24372 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.crt with IP's: []
	I0601 11:33:43.371832   24372 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.crt ...
	I0601 11:33:43.371841   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.crt: {Name:mk544c07922cc234cc53419ef63196744c4873ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:43.372091   24372 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.key ...
	I0601 11:33:43.372100   24372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.key: {Name:mke1c42fae407b5c0ecc3206ba3fe3c1c53dbfbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:33:43.372549   24372 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:33:43.372616   24372 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:33:43.372642   24372 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:33:43.372709   24372 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:33:43.372764   24372 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:33:43.372827   24372 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:33:43.372971   24372 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:33:43.373508   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:33:43.391570   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:33:43.409406   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:33:43.426275   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:33:43.443554   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:33:43.460447   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:33:43.480867   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:33:43.497678   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:33:43.514738   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:33:43.532101   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:33:43.549142   24372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:33:43.565469   24372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:33:43.578165   24372 ssh_runner.go:195] Run: openssl version
	I0601 11:33:43.583332   24372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:33:43.591086   24372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:33:43.595308   24372 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:33:43.595352   24372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:33:43.600583   24372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:33:43.608293   24372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:33:43.615798   24372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:33:43.619369   24372 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:33:43.619407   24372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:33:43.624625   24372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:33:43.632100   24372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:33:43.639516   24372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:33:43.643182   24372 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:33:43.643221   24372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:33:43.648355   24372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
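Note: the openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CA files, and each ln -fs creates the <hash>.0 link in /etc/ssl/certs that makes the CA trusted system-wide. The equivalent, done by hand for one certificate:

    # Link a CA into OpenSSL's hashed cert directory (what the log does above).
    cert=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/$h.0"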
	I0601 11:33:43.655678   24372 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220601113329-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220601113329-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:33:43.655757   24372 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:33:43.683330   24372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:33:43.690766   24372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:33:43.697751   24372 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:33:43.697791   24372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:33:43.704901   24372 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:33:43.704926   24372 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:33:44.418775   24372 out.go:204]   - Generating certificates and keys ...
	I0601 11:33:47.368544   24372 out.go:204]   - Booting up control plane ...
	W0601 11:35:42.293202   24372 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220601113329-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220601113329-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
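Note: when kubeadm init times out on the 10248/healthz probe as above, the kubelet journal inside the node usually names the real cause; the suggested systemctl/journalctl commands must be run inside the minikube container, not on the macOS host. A sketch using minikube ssh with the profile from this log:

    # Inspect the kubelet's own failure output inside the node.
    minikube -p kubernetes-upgrade-20220601113329-16804 ssh \
      'sudo journalctl -u kubelet --no-pager -n 50'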
	
	I0601 11:35:42.293237   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:35:42.712680   24372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
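Note: between attempts minikube tears the half-initialized control plane down with kubeadm reset (removing the /etc/kubernetes manifests and local etcd state) and checks whether the kubelet unit is still active, so the second init starts from a clean slate. Done manually on the node, it would look like:

    # Clean up a failed init so it can be retried; --force skips the prompt.
    sudo kubeadm reset --cri-socket /var/run/dockershim.sock --force
    systemctl is-active --quiet kubelet && echo kubelet running || echo kubelet stopped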
	I0601 11:35:42.722541   24372 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:35:42.722596   24372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:35:42.730510   24372 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:35:42.730533   24372 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:35:43.440246   24372 out.go:204]   - Generating certificates and keys ...
	I0601 11:35:43.983732   24372 out.go:204]   - Booting up control plane ...
	I0601 11:37:38.903701   24372 kubeadm.go:397] StartCluster complete in 3m55.241420883s
	I0601 11:37:38.903777   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:37:38.932641   24372 logs.go:274] 0 containers: []
	W0601 11:37:38.932653   24372 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:37:38.932709   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:37:38.961122   24372 logs.go:274] 0 containers: []
	W0601 11:37:38.961134   24372 logs.go:276] No container was found matching "etcd"
	I0601 11:37:38.961191   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:37:38.989513   24372 logs.go:274] 0 containers: []
	W0601 11:37:38.989526   24372 logs.go:276] No container was found matching "coredns"
	I0601 11:37:38.989583   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:37:39.018962   24372 logs.go:274] 0 containers: []
	W0601 11:37:39.018976   24372 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:37:39.019036   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:37:39.049349   24372 logs.go:274] 0 containers: []
	W0601 11:37:39.049361   24372 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:37:39.049418   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:37:39.078733   24372 logs.go:274] 0 containers: []
	W0601 11:37:39.078752   24372 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:37:39.078807   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:37:39.108440   24372 logs.go:274] 0 containers: []
	W0601 11:37:39.108452   24372 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:37:39.108516   24372 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:37:39.139221   24372 logs.go:274] 0 containers: []
	W0601 11:37:39.139233   24372 logs.go:276] No container was found matching "kube-controller-manager"
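Note: each container check above filters docker ps by name. The kubelet names Kubernetes containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name filter on k8s_kube-apiserver finds the apiserver container in any state; every scan came back empty here, confirming the kubelet never started the static pods. The same check by hand:

    # List any apiserver container the kubelet ever created, running or exited.
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'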
	I0601 11:37:39.139240   24372 logs.go:123] Gathering logs for Docker ...
	I0601 11:37:39.139247   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:37:39.153250   24372 logs.go:123] Gathering logs for container status ...
	I0601 11:37:39.153264   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:37:41.213888   24372 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060554513s)
	I0601 11:37:41.213997   24372 logs.go:123] Gathering logs for kubelet ...
	I0601 11:37:41.214003   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:37:41.252627   24372 logs.go:123] Gathering logs for dmesg ...
	I0601 11:37:41.252639   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:37:41.265444   24372 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:37:41.265455   24372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:37:41.316861   24372 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0601 11:37:41.316882   24372 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 11:37:41.316896   24372 out.go:239] * 
	W0601 11:37:41.317025   24372 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:37:41.317039   24372 out.go:239] * 
	W0601 11:37:41.317614   24372 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:37:41.386419   24372 out.go:177] 
	W0601 11:37:41.429605   24372 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[... the preceding two kubelet-check lines repeat four more times ...]
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:37:41.429764   24372 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 11:37:41.429857   24372 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 11:37:41.472305   24372 out.go:177] 

                                                
                                                
** /stderr **
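Editor's note: the run above fails with minikube's K8S_KUBELET_NOT_RUNNING error; the kubelet never answered the http://localhost:10248/healthz probe, so 'kubeadm init' timed out in the wait-control-plane phase. minikube's own suggestion (printed above) is to force the systemd cgroup driver. A minimal retry sketch using only commands and flags that already appear in this log; whether it unblocks this particular CI run is untested:

	out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still refuses to come up, read its journal inside the node:
	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 ssh -- sudo journalctl -xeu kubelet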
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220601113329-16804
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220601113329-16804: (1.641151792s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 status --format={{.Host}}: exit status 7 (118.010691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
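Editor's note: 'minikube status' reports state through its exit code as well as its template output; in this run, exit status 7 together with "Stopped" is exactly what the preceding 'minikube stop' should produce, which is why the test treats it as acceptable ("may be ok"). To reproduce the check by hand (the status command is taken from the log; the exit-code echo is illustrative):

	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 status --format={{.Host}}
	echo $?   # prints 7 in this run while the host is stopped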
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (22.865450291s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601113329-16804 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (732.153551ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220601113329-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220601113329-16804
	    minikube start -p kubernetes-upgrade-20220601113329-16804 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601113329-168042 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601113329-16804 --kubernetes-version=v1.23.6
	    

                                                
                                                
** /stderr **
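Editor's note: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the outcome the test is asserting; the downgrade from v1.23.6 to v1.16.0 is attempted precisely so it can be seen to fail safely. The only supported route back to an older Kubernetes version is the delete-and-recreate sequence minikube suggests above, i.e.:

	minikube delete -p kubernetes-upgrade-20220601113329-16804
	minikube start -p kubernetes-upgrade-20220601113329-16804 --kubernetes-version=v1.16.0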
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220601113329-16804 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : (14.01581376s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-06-01 11:38:21.042369 -0700 PDT m=+2462.975468854
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220601113329-16804
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220601113329-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e48b5ac93e1bd4ee9a2506d62fd21490c551cda0b3323e6354013604afebb182",
	        "Created": "2022-06-01T18:33:38.587396886Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134586,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:37:44.670176106Z",
	            "FinishedAt": "2022-06-01T18:37:42.086155914Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/e48b5ac93e1bd4ee9a2506d62fd21490c551cda0b3323e6354013604afebb182/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e48b5ac93e1bd4ee9a2506d62fd21490c551cda0b3323e6354013604afebb182/hostname",
	        "HostsPath": "/var/lib/docker/containers/e48b5ac93e1bd4ee9a2506d62fd21490c551cda0b3323e6354013604afebb182/hosts",
	        "LogPath": "/var/lib/docker/containers/e48b5ac93e1bd4ee9a2506d62fd21490c551cda0b3323e6354013604afebb182/e48b5ac93e1bd4ee9a2506d62fd21490c551cda0b3323e6354013604afebb182-json.log",
	        "Name": "/kubernetes-upgrade-20220601113329-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220601113329-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220601113329-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fa6cc6f7609cf6cb30f7f2ea0d944a9e10c31eef4923ab03ff8493f40c8dce25-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa6cc6f7609cf6cb30f7f2ea0d944a9e10c31eef4923ab03ff8493f40c8dce25/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa6cc6f7609cf6cb30f7f2ea0d944a9e10c31eef4923ab03ff8493f40c8dce25/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa6cc6f7609cf6cb30f7f2ea0d944a9e10c31eef4923ab03ff8493f40c8dce25/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220601113329-16804",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220601113329-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220601113329-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220601113329-16804",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220601113329-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69febb10ddd434e5fa47b86ca4ed27478708669bddf521b215511d3ebe652ea7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53438"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53441"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/69febb10ddd4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220601113329-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e48b5ac93e1b",
	                        "kubernetes-upgrade-20220601113329-16804"
	                    ],
	                    "NetworkID": "130306cc5c79c14a21fed335260e64b3ec04f3e53b854f5b910bf39e5caaa731",
	                    "EndpointID": "18e5b40bfa734a02c086ae27ff45aa61cc115ea8f5e74ff32c25b0a7fa07abc1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
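Editor's note: the full 'docker inspect' dump is captured for the post-mortem, but individual fields can be pulled with Go templates instead of scanning the JSON. The port template below is the same one minikube itself runs later in this log; the first line is an illustrative sketch of the usual state fields:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}' kubernetes-upgrade-20220601113329-16804
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-20220601113329-16804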
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220601113329-16804 -n kubernetes-upgrade-20220601113329-16804

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 logs -n 25: (3.403024727s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                      | custom-flannel-20220601113005-16804     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:30 PDT |
	|         | custom-flannel-20220601113005-16804     |                                         |         |                |                     |                     |
	| start   | -p                                      | offline-docker-20220601113004-16804     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:30 PDT |
	|         | offline-docker-20220601113004-16804     |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                  |                                         |         |                |                     |                     |
	|         | --memory=2048 --wait=true               |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| delete  | -p                                      | offline-docker-20220601113004-16804     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:30 PDT |
	|         | offline-docker-20220601113004-16804     |                                         |         |                |                     |                     |
	| start   | -p                                      | force-systemd-env-20220601113027-16804  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:30 PDT |
	|         | force-systemd-env-20220601113027-16804  |                                         |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5    |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| ssh     | force-systemd-env-20220601113027-16804  | force-systemd-env-20220601113027-16804  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:30 PDT |
	|         | ssh docker info --format                |                                         |         |                |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |                |                     |                     |
	| delete  | -p                                      | force-systemd-env-20220601113027-16804  | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:30 PDT |
	|         | force-systemd-env-20220601113027-16804  |                                         |         |                |                     |                     |
	| start   | -p                                      | force-systemd-flag-20220601113049-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:31 PDT |
	|         | force-systemd-flag-20220601113049-16804 |                                         |         |                |                     |                     |
	|         | --memory=2048 --force-systemd           |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |         |                |                     |                     |
	| ssh     | force-systemd-flag-20220601113049-16804 | force-systemd-flag-20220601113049-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | ssh docker info --format                |                                         |         |                |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |                |                     |                     |
	| start   | -p                                      | docker-flags-20220601113057-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:30 PDT | 01 Jun 22 11:31 PDT |
	|         | docker-flags-20220601113057-16804       |                                         |         |                |                     |                     |
	|         | --cache-images=false                    |                                         |         |                |                     |                     |
	|         | --memory=2048                           |                                         |         |                |                     |                     |
	|         | --install-addons=false                  |                                         |         |                |                     |                     |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                    |                                         |         |                |                     |                     |
	|         | --docker-opt=debug                      |                                         |         |                |                     |                     |
	|         | --docker-opt=icc=true                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=5                  |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| ssh     | docker-flags-20220601113057-16804       | docker-flags-20220601113057-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |                |                     |                     |
	|         | --property=Environment --no-pager       |                                         |         |                |                     |                     |
	| delete  | -p                                      | force-systemd-flag-20220601113049-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | force-systemd-flag-20220601113049-16804 |                                         |         |                |                     |                     |
	| ssh     | docker-flags-20220601113057-16804       | docker-flags-20220601113057-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |                |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |                |                     |                     |
	| delete  | -p                                      | docker-flags-20220601113057-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | docker-flags-20220601113057-16804       |                                         |         |                |                     |                     |
	| start   | -p                                      | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | cert-options-20220601113126-16804       |                                         |         |                |                     |                     |
	|         | --memory=2048                           |                                         |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |                |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |                |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |                |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |                |                     |                     |
	| ssh     | cert-options-20220601113126-16804       | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |                |                     |                     |
	| ssh     | -p                                      | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | cert-options-20220601113126-16804       |                                         |         |                |                     |                     |
	|         | -- sudo cat                             |                                         |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |                |                     |                     |
	| delete  | -p                                      | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | cert-options-20220601113126-16804       |                                         |         |                |                     |                     |
	| delete  | -p                                      | running-upgrade-20220601113155-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:32 PDT | 01 Jun 22 11:32 PDT |
	|         | running-upgrade-20220601113155-16804    |                                         |         |                |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220601113242-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:33 PDT | 01 Jun 22 11:33 PDT |
	|         | missing-upgrade-20220601113242-16804    |                                         |         |                |                     |                     |
	| start   | -p                                      | cert-expiration-20220601113122-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:35 PDT |
	|         | cert-expiration-20220601113122-16804    |                                         |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:37 PDT | 01 Jun 22 11:37 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:37 PDT | 01 Jun 22 11:38 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	|         | --memory=2200                           |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6            |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |                |                     |                     |
	| start   | -p                                      | cert-expiration-20220601113122-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | cert-expiration-20220601113122-16804    |                                         |         |                |                     |                     |
	|         | --memory=2048                           |                                         |         |                |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| delete  | -p                                      | cert-expiration-20220601113122-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | cert-expiration-20220601113122-16804    |                                         |         |                |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	|         | --memory=2200                           |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6            |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |                |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:38:11
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:38:11.783958   24862 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:38:11.784083   24862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:38:11.784089   24862 out.go:309] Setting ErrFile to fd 2...
	I0601 11:38:11.784092   24862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:38:11.784180   24862 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:38:11.784463   24862 out.go:303] Setting JSON to false
	I0601 11:38:11.799649   24862 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":7661,"bootTime":1654101030,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:38:11.799728   24862 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:38:11.824379   24862 out.go:177] * [cert-expiration-20220601113122-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:38:11.871105   24862 notify.go:193] Checking for updates...
	I0601 11:38:11.892931   24862 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:38:11.913871   24862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:38:11.971807   24862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:38:12.030852   24862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:38:12.051964   24862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:38:10.644532   24809 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 11:38:10.644743   24809 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220601113329-16804 dig +short host.docker.internal
	I0601 11:38:10.789451   24809 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:38:10.789554   24809 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:38:10.793999   24809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:38:10.867597   24809 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:38:10.867662   24809 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:38:10.898830   24809 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0601 11:38:10.898847   24809 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:38:10.898918   24809 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:38:10.929595   24809 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
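Editor's note: the '<none>:<none>' entries are untagged images, most likely layers left behind by the earlier v1.16.0 attempt and the kicbase image; 'docker images --format {{.Repository}}:{{.Tag}}' simply prints them without names. They do not affect the preload check, which only looks for the expected tags. If desired, they could be listed or removed inside the node with standard Docker commands (illustrative, not part of the test flow):

	docker images -f dangling=true
	docker image prune -f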
	I0601 11:38:10.929622   24809 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:38:10.929742   24809 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:38:11.006351   24809 cni.go:95] Creating CNI manager for ""
	I0601 11:38:11.006366   24809 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:38:11.006379   24809 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:38:11.006391   24809 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220601113329-16804 NodeName:kubernetes-upgrade-20220601113329-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:38:11.006495   24809 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220601113329-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:38:11.006562   24809 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220601113329-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220601113329-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:38:11.006619   24809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:38:11.014301   24809 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:38:11.014355   24809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:38:11.021307   24809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0601 11:38:11.033644   24809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:38:11.046225   24809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
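
The 2060-byte payload shipped above as /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config printed at kubeadm.go:162, rendered from the options struct logged at kubeadm.go:158. A minimal sketch of that kind of render step, assuming a text/template-based generator (the struct and template below are illustrative, not minikube's actual ones):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative subset of the options struct logged at kubeadm.go:158.
    type Options struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	NodeName         string
    	PodSubnet        string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values read off the log above.
    	opts := Options{
    		AdvertiseAddress: "192.168.49.2",
    		APIServerPort:    8443,
    		NodeName:         "kubernetes-upgrade-20220601113329-16804",
    		PodSubnet:        "10.244.0.0/16",
    	}
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }

Rendering to a .new file first lets the later diff step decide whether anything actually changed before the live config is overwritten.
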
	I0601 11:38:11.059552   24809 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:38:11.063562   24809 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804 for IP: 192.168.49.2
	I0601 11:38:11.063692   24809 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:38:11.063742   24809 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:38:11.063818   24809 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key
	I0601 11:38:11.063883   24809 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key.dd3b5fb2
	I0601 11:38:11.063932   24809 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.key
	I0601 11:38:11.064123   24809 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:38:11.064158   24809 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:38:11.064168   24809 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:38:11.064198   24809 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:38:11.064228   24809 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:38:11.064257   24809 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:38:11.064320   24809 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:38:11.064792   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:38:11.081644   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:38:11.098496   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:38:11.115638   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:38:11.132634   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:38:11.152444   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:38:11.171270   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:38:11.189289   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:38:11.208002   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:38:11.225267   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:38:11.242004   24809 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:38:11.258923   24809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:38:11.272057   24809 ssh_runner.go:195] Run: openssl version
	I0601 11:38:11.277365   24809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:38:11.285585   24809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:38:11.289480   24809 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:38:11.289523   24809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:38:11.294970   24809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:38:11.302363   24809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:38:11.310473   24809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:38:11.314172   24809 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:38:11.314215   24809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:38:11.319279   24809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:38:11.326539   24809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:38:11.334494   24809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:38:11.338298   24809 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:38:11.338335   24809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:38:11.343461   24809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
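
The test/ls/openssl/ln sequences above install each CA into the standard OpenSSL hashed-directory layout: a cert in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem). The same loop as a sketch, shelling out to openssl exactly as the log does (paths taken from the log; needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pems := []string{
    		"/usr/share/ca-certificates/168042.pem",
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/16804.pem",
    	}
    	for _, pem := range pems {
    		// openssl prints the short subject hash used for the .0 symlink name.
    		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    		if err != nil {
    			panic(err)
    		}
    		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    		// Equivalent to: test -L link || ln -fs pem link
    		if _, err := os.Lstat(link); err != nil {
    			if err := os.Symlink(pem, link); err != nil {
    				panic(err)
    			}
    		}
    		fmt.Println(link, "->", pem)
    	}
    }
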
	I0601 11:38:11.350675   24809 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220601113329-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220601113329-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:38:11.350762   24809 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:38:11.380565   24809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:38:11.388329   24809 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:38:11.388349   24809 kubeadm.go:626] restartCluster start
	I0601 11:38:11.388414   24809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:38:11.395409   24809 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:38:11.395464   24809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:38:11.467476   24809 kubeconfig.go:92] found "kubernetes-upgrade-20220601113329-16804" server: "https://127.0.0.1:53441"
	I0601 11:38:11.467994   24809 kapi.go:59] client config for kubernetes-upgrade-20220601113329-16804: &rest.Config{Host:"https://127.0.0.1:53441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
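
The rest.Config dumped above is built from the profile's kubeconfig entry and the forwarded port 53441. An equivalent client as a sketch using client-go (the kubeconfig path is an assumption; any kubeconfig holding this profile works), which also reproduces the kube-system pod listing a few lines below:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed path; minikube merges its profiles into the user's kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors the system_pods check that follows in the log.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Println(p.Name, p.Status.Phase)
    	}
    }
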
	I0601 11:38:11.468477   24809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:38:11.476142   24809 api_server.go:165] Checking apiserver status ...
	I0601 11:38:11.476195   24809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:11.484810   24809 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0601 11:38:11.493364   24809 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:38:11.493374   24809 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53441/healthz ...
	I0601 11:38:11.501349   24809 api_server.go:266] https://127.0.0.1:53441/healthz returned 200:
	ok
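
The healthz probe is a plain HTTPS GET against the forwarded apiserver port, trusting the profile's CA. A minimal sketch, assuming the CA sits at $HOME/.minikube/ca.crt as in the paths logged earlier:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/ca.crt")) // assumed location
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get("https://127.0.0.1:53441/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
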
	I0601 11:38:11.512004   24809 system_pods.go:86] 5 kube-system pods found
	I0601 11:38:11.512022   24809 system_pods.go:89] "etcd-kubernetes-upgrade-20220601113329-16804" [90890dff-8a32-4764-9d0f-c54fd8090f36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:38:11.512030   24809 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220601113329-16804" [0650ac6d-ec25-4ffb-bec1-568754ccad63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:38:11.512038   24809 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220601113329-16804" [8405e5d2-0128-4852-a468-ee6b68c038ee] Pending
	I0601 11:38:11.512045   24809 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220601113329-16804" [c94b4cda-7727-4348-945c-171235a460a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:38:11.512052   24809 system_pods.go:89] "storage-provisioner" [5968f455-dcc1-47a2-bc05-401a8cfd6978] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:38:11.512060   24809 kubeadm.go:610] needs reconfigure: missing components: kube-dns, kube-controller-manager, kube-proxy
	I0601 11:38:11.512066   24809 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:38:11.512130   24809 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:38:11.542567   24809 docker.go:442] Stopping containers: [c600f7b9b748 606f0400de99 17cee5359bff d1577f830c3d eff1a7ba385f e9a27477dd70 139c0c94fcdf ff070ae07e9a]
	I0601 11:38:11.542639   24809 ssh_runner.go:195] Run: docker stop c600f7b9b748 606f0400de99 17cee5359bff d1577f830c3d eff1a7ba385f e9a27477dd70 139c0c94fcdf ff070ae07e9a
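
Stopping the kube-system containers is two docker invocations: list IDs under the kubelet naming convention (k8s_<container>_<pod>_<namespace>_...), then a bulk stop. A sketch of the same pair of commands:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter the log uses: kubelet-managed containers in kube-system.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Println("nothing to stop")
    		return
    	}
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("stopped:", strings.Join(ids, " "))
    }
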
	I0601 11:38:12.073264   24862 config.go:178] Loaded profile config "cert-expiration-20220601113122-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:38:12.073680   24862 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:38:12.146911   24862 docker.go:137] docker version: linux-20.10.14
	I0601 11:38:12.147019   24862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:38:12.273887   24862 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-01 18:38:12.212349187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:38:12.295751   24862 out.go:177] * Using the docker driver based on existing profile
	I0601 11:38:12.337276   24862 start.go:284] selected driver: docker
	I0601 11:38:12.337362   24862 start.go:806] validating driver "docker" against &{Name:cert-expiration-20220601113122-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cert-expiration-20220601113122-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:38:12.337456   24862 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:38:12.340107   24862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:38:12.467136   24862 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-01 18:38:12.405411278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:38:12.467281   24862 cni.go:95] Creating CNI manager for ""
	I0601 11:38:12.467289   24862 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:38:12.467297   24862 start_flags.go:306] config:
	{Name:cert-expiration-20220601113122-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cert-expiration-20220601113122-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:38:12.509247   24862 out.go:177] * Starting control plane node cert-expiration-20220601113122-16804 in cluster cert-expiration-20220601113122-16804
	I0601 11:38:12.530292   24862 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:38:12.567673   24862 out.go:177] * Pulling base image ...
	I0601 11:38:12.610281   24862 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:38:12.610319   24862 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:38:12.610329   24862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 11:38:12.610339   24862 cache.go:57] Caching tarball of preloaded images
	I0601 11:38:12.610440   24862 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:38:12.610453   24862 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
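
The preload check above is essentially a stat on a cache path derived from the Kubernetes version, container runtime, storage driver, and architecture; it is the same existence check the preload-exists tests in this report perform. A sketch of that lookup (the name pattern is read off the logged path):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	home, err := os.UserHomeDir()
    	if err != nil {
    		panic(err)
    	}
    	// Tarball name follows the pattern visible in the log:
    	// preloaded-images-k8s-<schema>-<k8sver>-<runtime>-<storage>-<arch>.tar.lz4
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", "v1.23.6")
    	path := filepath.Join(home, ".minikube", "cache", "preloaded-tarball", name)
    	if fi, err := os.Stat(path); err == nil {
    		fmt.Printf("found local preload: %s (%d bytes)\n", path, fi.Size())
    	} else {
    		fmt.Println("preload missing, would fall back to download:", err)
    	}
    }
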
	I0601 11:38:12.610940   24862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/config.json ...
	I0601 11:38:12.690269   24862 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:38:12.690297   24862 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:38:12.690308   24862 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:38:12.690371   24862 start.go:352] acquiring machines lock for cert-expiration-20220601113122-16804: {Name:mkc6b0d9e6e7d8114fae6f23e4d38eca638fe425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:38:12.690506   24862 start.go:356] acquired machines lock for "cert-expiration-20220601113122-16804" in 111.398µs
	I0601 11:38:12.690527   24862 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:38:12.690543   24862 fix.go:55] fixHost starting: 
	I0601 11:38:12.690808   24862 cli_runner.go:164] Run: docker container inspect cert-expiration-20220601113122-16804 --format={{.State.Status}}
	I0601 11:38:12.768480   24862 fix.go:103] recreateIfNeeded on cert-expiration-20220601113122-16804: state=Running err=<nil>
	W0601 11:38:12.768532   24862 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:38:12.792349   24862 out.go:177] * Updating the running docker "cert-expiration-20220601113122-16804" container ...
	I0601 11:38:12.850773   24862 machine.go:88] provisioning docker machine ...
	I0601 11:38:12.850821   24862 ubuntu.go:169] provisioning hostname "cert-expiration-20220601113122-16804"
	I0601 11:38:12.851009   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:12.926844   24862 main.go:134] libmachine: Using SSH client type: native
	I0601 11:38:12.927046   24862 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53204 <nil> <nil>}
	I0601 11:38:12.927056   24862 main.go:134] libmachine: About to run SSH command:
	sudo hostname cert-expiration-20220601113122-16804 && echo "cert-expiration-20220601113122-16804" | sudo tee /etc/hostname
	I0601 11:38:13.058360   24862 main.go:134] libmachine: SSH cmd err, output: <nil>: cert-expiration-20220601113122-16804
	
	I0601 11:38:13.058430   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:13.131608   24862 main.go:134] libmachine: Using SSH client type: native
	I0601 11:38:13.131760   24862 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53204 <nil> <nil>}
	I0601 11:38:13.131775   24862 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-20220601113122-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-20220601113122-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-20220601113122-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:38:13.255830   24862 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:38:13.255846   24862 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:38:13.255872   24862 ubuntu.go:177] setting up certificates
	I0601 11:38:13.255883   24862 provision.go:83] configureAuth start
	I0601 11:38:13.255952   24862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220601113122-16804
	I0601 11:38:13.331308   24862 provision.go:138] copyHostCerts
	I0601 11:38:13.331382   24862 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:38:13.331388   24862 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:38:13.331488   24862 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:38:13.331684   24862 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:38:13.331690   24862 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:38:13.331747   24862 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:38:13.331888   24862 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:38:13.331892   24862 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:38:13.331950   24862 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:38:13.332062   24862 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-20220601113122-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-20220601113122-16804]
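
provision.go:112 issues a fresh server certificate against the machine CA with the SAN set shown. A sketch of such issuance with crypto/x509, assuming an RSA CA key in PKCS#1 PEM form (the validity below is illustrative; this is not minikube's exact code path):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Assumed inputs: the machine CA cert and its RSA private key.
    	caPEM, err := os.ReadFile("ca.pem")
    	if err != nil {
    		panic(err)
    	}
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	if err != nil {
    		panic(err)
    	}
    	cb, _ := pem.Decode(caPEM)
    	ca, err := x509.ParseCertificate(cb.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	kb, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-20220601113122-16804"}},
    		NotBefore:    time.Now().Add(-time.Minute),
    		NotAfter:     time.Now().Add(24 * time.Hour), // illustrative validity only
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN set from the provision.go:112 line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "cert-expiration-20220601113122-16804"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
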
	I0601 11:38:13.421860   24862 provision.go:172] copyRemoteCerts
	I0601 11:38:13.421913   24862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:38:13.421955   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:13.496526   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:13.580573   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:38:13.597861   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 11:38:13.615413   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 11:38:13.636164   24862 provision.go:86] duration metric: configureAuth took 380.256873ms
	I0601 11:38:13.636173   24862 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:38:13.636331   24862 config.go:178] Loaded profile config "cert-expiration-20220601113122-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:38:13.636382   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:13.710933   24862 main.go:134] libmachine: Using SSH client type: native
	I0601 11:38:13.711109   24862 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53204 <nil> <nil>}
	I0601 11:38:13.711116   24862 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:38:13.827075   24862 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:38:13.827087   24862 ubuntu.go:71] root file system type: overlay
	I0601 11:38:13.827283   24862 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:38:13.827352   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:13.904064   24862 main.go:134] libmachine: Using SSH client type: native
	I0601 11:38:13.904202   24862 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53204 <nil> <nil>}
	I0601 11:38:13.904244   24862 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:38:14.031539   24862 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:38:14.031694   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:14.111853   24862 main.go:134] libmachine: Using SSH client type: native
	I0601 11:38:14.112048   24862 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 53204 <nil> <nil>}
	I0601 11:38:14.112058   24862 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:38:14.235711   24862 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:38:14.235721   24862 machine.go:91] provisioned docker machine in 1.384899853s
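
The diff || { mv && daemon-reload && restart; } one-liner above is what keeps re-provisioning idempotent: docker is only restarted when the rendered unit actually differs from the live one. The same compare-then-swap pattern as a local sketch:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	cur, _ := os.ReadFile(unit) // a missing live unit reads as empty, forcing the swap
    	next, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		panic(err)
    	}
    	if bytes.Equal(cur, next) {
    		os.Remove(unit + ".new") // tidy-up choice; the shell version leaves the .new file
    		return
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		panic(err)
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", "docker"},
    		{"restart", "docker"},
    	} {
    		if err := exec.Command("systemctl", args...).Run(); err != nil {
    			panic(err)
    		}
    	}
    }
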
	I0601 11:38:14.235730   24862 start.go:306] post-start starting for "cert-expiration-20220601113122-16804" (driver="docker")
	I0601 11:38:14.235733   24862 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:38:14.235805   24862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:38:14.235851   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:14.308189   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:14.395921   24862 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:38:14.399887   24862 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:38:14.399900   24862 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:38:14.399905   24862 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:38:14.399908   24862 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:38:14.399920   24862 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:38:14.400042   24862 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:38:14.400178   24862 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:38:14.400321   24862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:38:14.408260   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:38:14.428424   24862 start.go:309] post-start completed in 192.678307ms
	I0601 11:38:14.428502   24862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:38:14.428558   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:14.500766   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:14.584983   24862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:38:14.590551   24862 fix.go:57] fixHost completed within 1.899955237s
	I0601 11:38:14.590563   24862 start.go:81] releasing machines lock for "cert-expiration-20220601113122-16804", held for 1.899998696s
	I0601 11:38:14.590651   24862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220601113122-16804
	I0601 11:38:14.661192   24862 ssh_runner.go:195] Run: systemctl --version
	I0601 11:38:14.661213   24862 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:38:14.661248   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:14.661275   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:14.741343   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:14.743623   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:14.962318   24862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:38:14.974951   24862 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:38:14.988307   24862 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:38:14.988360   24862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:38:15.000729   24862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:38:15.014175   24862 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
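
For reference, the crictl.yaml written two steps above points crictl at the dockershim socket, so CRI tooling talks to the same runtime the kubelet uses. Writing that two-line config programmatically is a one-call sketch (requires root for /etc):

    package main

    import "os"

    func main() {
    	cfg := "runtime-endpoint: unix:///var/run/dockershim.sock\n" +
    		"image-endpoint: unix:///var/run/dockershim.sock\n"
    	// 0644 matches typical /etc config permissions.
    	if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
    		panic(err)
    	}
    }
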
	I0601 11:38:15.119939   24862 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:38:15.233408   24862 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:38:15.246056   24862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:38:15.349774   24862 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:38:15.363204   24862 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:38:15.406014   24862 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:38:15.487471   24862 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 11:38:15.487582   24862 cli_runner.go:164] Run: docker exec -t cert-expiration-20220601113122-16804 dig +short host.docker.internal
	I0601 11:38:15.630343   24862 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:38:15.630458   24862 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:38:15.635373   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:15.711499   24862 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 11:38:15.711550   24862 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:38:15.743429   24862 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 11:38:15.743445   24862 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:38:15.743511   24862 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:38:15.779228   24862 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 11:38:15.779266   24862 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:38:15.779374   24862 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:38:15.883053   24862 cni.go:95] Creating CNI manager for ""
	I0601 11:38:15.883061   24862 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:38:15.883076   24862 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:38:15.883088   24862 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-20220601113122-16804 NodeName:cert-expiration-20220601113122-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:38:15.883250   24862 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cert-expiration-20220601113122-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:38:15.883367   24862 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cert-expiration-20220601113122-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:cert-expiration-20220601113122-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:38:15.883431   24862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:38:15.892387   24862 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:38:15.892446   24862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:38:15.903255   24862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0601 11:38:15.917075   24862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:38:15.932552   24862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
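
The kubeadm config printed above is rendered from the cluster profile and staged on the node as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy. A cut-down, hypothetical Go sketch of rendering such a config with text/template (field names and values mirror the log; minikube's actual generator is more elaborate):

    package main

    import (
        "os"
        "text/template"
    )

    // A miniature stand-in for the ClusterConfiguration section generated above.
    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta3\n" +
            "kind: ClusterConfiguration\n" +
            "kubernetesVersion: {{.KubernetesVersion}}\n" +
            "controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443\n" +
            "networking:\n" +
            "  dnsDomain: {{.DNSDomain}}\n" +
            "  podSubnet: \"{{.PodSubnet}}\"\n" +
            "  serviceSubnet: {{.ServiceSubnet}}\n"))

    func main() {
        // Values taken from the kubeadm.go:158 options line above.
        data := struct {
            KubernetesVersion, ControlPlaneEndpoint, DNSDomain, PodSubnet, ServiceSubnet string
        }{"v1.23.6", "control-plane.minikube.internal", "cluster.local", "10.244.0.0/16", "10.96.0.0/12"}
        if err := kubeadmTmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
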
	I0601 11:38:15.947695   24862 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:38:15.952566   24862 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804 for IP: 192.168.58.2
	I0601 11:38:15.952673   24862 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:38:15.952720   24862 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	W0601 11:38:15.952871   24862 out.go:239] ! Certificate client.crt has expired. Generating a new one...
	I0601 11:38:15.952885   24862 certs.go:527] cert expired /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.crt: expiration: 2022-06-01 18:38:00 +0000 UTC, now: 2022-06-01 11:38:15.952879 -0700 PDT m=+4.215938230
	I0601 11:38:15.953066   24862 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.key
	I0601 11:38:15.953095   24862 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.crt with IP's: []
	I0601 11:38:16.207954   24862 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.crt ...
	I0601 11:38:16.207963   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.crt: {Name:mk9c134dd2d8b765fec6fa48eb2257d1066dfbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:16.208227   24862 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.key ...
	I0601 11:38:16.208232   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/client.key: {Name:mk0bc5046e8dc84ca96f387a00e278b3ab216ae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0601 11:38:16.208484   24862 out.go:239] ! Certificate apiserver.crt.cee25041 has expired. Generating a new one...
	I0601 11:38:16.208496   24862 certs.go:527] cert expired /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt.cee25041: expiration: 2022-06-01 18:38:00 +0000 UTC, now: 2022-06-01 11:38:16.208492 -0700 PDT m=+4.471544637
	I0601 11:38:16.208640   24862 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.key.cee25041
	I0601 11:38:16.208664   24862 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:38:16.265208   24862 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt.cee25041 ...
	I0601 11:38:16.265215   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt.cee25041: {Name:mkcb9b50e895960524c9118517b1083847f93da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:16.265432   24862 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.key.cee25041 ...
	I0601 11:38:16.265442   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.key.cee25041: {Name:mk59409629447fb21f926908922c75052fa02a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:16.265582   24862 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt
	I0601 11:38:16.265784   24862 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.key
	W0601 11:38:16.266038   24862 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0601 11:38:16.266047   24862 certs.go:527] cert expired /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.crt: expiration: 2022-06-01 18:38:00 +0000 UTC, now: 2022-06-01 11:38:16.266044 -0700 PDT m=+4.529094842
	I0601 11:38:16.266162   24862 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.key
	I0601 11:38:16.266179   24862 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.crt with IP's: []
	I0601 11:38:16.422646   24862 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.crt ...
	I0601 11:38:16.422657   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.crt: {Name:mk37db035baee0dab042c8e786258fae794872b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:16.434233   24862 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.key ...
	I0601 11:38:16.434240   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.key: {Name:mka2d499106de85945a2f9b8f6fe55d9e8d4541e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
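
Each "has expired. Generating a new one" warning above comes from comparing the certificate's NotAfter date against the current time (certs.go:527) and regenerating on failure; the cert-expiration test deliberately starts the cluster with a very short certificate lifetime, so the restart finds every profile cert expired. A minimal Go sketch of that expiry test, assuming a PEM-encoded certificate on disk (certExpired is a hypothetical helper):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certExpired reports whether the PEM certificate at path has passed
    // its NotAfter date -- the comparison the certs.go:527 lines above log.
    func certExpired(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().After(cert.NotAfter), nil
    }

    func main() {
        expired, err := certExpired("client.crt")
        fmt.Println(expired, err)
    }
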
	I0601 11:38:16.434587   24862 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:38:16.434622   24862 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:38:16.434631   24862 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:38:16.434657   24862 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:38:16.434683   24862 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:38:16.434708   24862 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:38:16.434768   24862 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:38:16.435200   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:38:16.457971   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:38:16.480682   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:38:16.507579   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cert-expiration-20220601113122-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:38:16.531328   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:38:16.555403   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:38:16.582735   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:38:16.606581   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:38:16.627895   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:38:16.649881   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:38:16.672682   24862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:38:16.696845   24862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:38:16.711837   24862 ssh_runner.go:195] Run: openssl version
	I0601 11:38:16.717462   24862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:38:16.727573   24862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:38:16.733054   24862 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:38:16.733099   24862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:38:16.739692   24862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:38:16.747514   24862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:38:16.756146   24862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:38:16.760598   24862 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:38:16.760664   24862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:38:16.766243   24862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:38:16.775380   24862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:38:16.785857   24862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:38:16.812860   24862 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:38:16.812924   24862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:38:16.819104   24862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
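
The openssl/ln sequence above installs the CA certificates the way OpenSSL expects to find trust anchors: each PEM is linked into /etc/ssl/certs under its own name and again as <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 here), where the hash comes from `openssl x509 -hash -noout`. A condensed Go sketch of the hash-and-symlink step (installCACert is hypothetical; the real commands run over SSH inside the node container):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert symlinks a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject hash, mirroring the ln -fs commands logged above.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
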
	I0601 11:38:16.827205   24862 kubeadm.go:395] StartCluster: {Name:cert-expiration-20220601113122-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cert-expiration-20220601113122-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:38:16.827301   24862 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:38:16.864086   24862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:38:16.872416   24862 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:38:16.872432   24862 kubeadm.go:626] restartCluster start
	I0601 11:38:16.872490   24862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:38:16.879817   24862 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:38:16.879880   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:16.959277   24862 kubeconfig.go:92] found "cert-expiration-20220601113122-16804" server: "https://127.0.0.1:53203"
	I0601 11:38:16.960290   24862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:38:16.968671   24862 api_server.go:165] Checking apiserver status ...
	I0601 11:38:16.968743   24862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:16.980137   24862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1637/cgroup
	W0601 11:38:16.989418   24862 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1637/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:38:16.989433   24862 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53203/healthz ...
	I0601 11:38:16.995112   24862 api_server.go:266] https://127.0.0.1:53203/healthz returned 200:
	ok
	I0601 11:38:17.006378   24862 system_pods.go:86] 7 kube-system pods found
	I0601 11:38:17.006387   24862 system_pods.go:89] "coredns-64897985d-tl42h" [1d693973-0097-4ef5-ad2a-0fe8e296ffec] Running
	I0601 11:38:17.006390   24862 system_pods.go:89] "etcd-cert-expiration-20220601113122-16804" [3cbf8b6b-a39c-4f72-9a08-20521c71a732] Running
	I0601 11:38:17.006393   24862 system_pods.go:89] "kube-apiserver-cert-expiration-20220601113122-16804" [6b81040e-47ca-40df-b355-9cdf2245a221] Running
	I0601 11:38:17.006396   24862 system_pods.go:89] "kube-controller-manager-cert-expiration-20220601113122-16804" [8f2567de-8abc-4c47-a190-57c8ddcdbe35] Running
	I0601 11:38:17.006398   24862 system_pods.go:89] "kube-proxy-w77gp" [5b4e9ad1-64bc-4ccc-8be7-05b29814526d] Running
	I0601 11:38:17.006400   24862 system_pods.go:89] "kube-scheduler-cert-expiration-20220601113122-16804" [6535e49d-3806-4ffe-90fc-825f957366a6] Running
	I0601 11:38:17.006403   24862 system_pods.go:89] "storage-provisioner" [15b29122-7e21-4aac-b6db-52fa4c6d3edb] Running
	I0601 11:38:17.007698   24862 api_server.go:140] control plane version: v1.23.6
	I0601 11:38:17.007704   24862 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0601 11:38:17.007708   24862 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0601 11:38:17.007713   24862 kubeadm.go:630] restartCluster took 135.275083ms
	I0601 11:38:17.007716   24862 kubeadm.go:397] StartCluster complete in 180.514379ms
	I0601 11:38:17.007724   24862 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:17.007795   24862 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:38:17.008449   24862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:17.012125   24862 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-expiration-20220601113122-16804" rescaled to 1
	I0601 11:38:17.012161   24862 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:38:17.012185   24862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:38:17.053685   24862 out.go:177] * Verifying Kubernetes components...
	I0601 11:38:12.764420   24809 ssh_runner.go:235] Completed: docker stop c600f7b9b748 606f0400de99 17cee5359bff d1577f830c3d eff1a7ba385f e9a27477dd70 139c0c94fcdf ff070ae07e9a: (1.221712681s)
	I0601 11:38:12.764489   24809 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:38:12.879647   24809 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:38:12.888251   24809 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5763 Jun  1 18:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5795 Jun  1 18:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5959 Jun  1 18:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Jun  1 18:35 /etc/kubernetes/scheduler.conf
	
	I0601 11:38:12.888308   24809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:38:12.896880   24809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:38:12.905048   24809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:38:12.914641   24809 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:38:12.923192   24809 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:38:12.931022   24809 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:38:12.931036   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:38:12.979259   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:38:13.815468   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:38:13.963565   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:38:14.015305   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
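
Because configuration files already exist on the node, the second profile in this interleaved log (pid 24809, kubernetes-upgrade) is reconfigured by replaying individual `kubeadm init phase` subcommands against the refreshed /var/tmp/minikube/kubeadm.yaml rather than running a full `kubeadm init`. A hypothetical Go sketch of that replay loop (the real calls go through ssh_runner with minikube's bundled kubeadm, as the PATH override above shows):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // rerunControlPlanePhases replays the init phases logged above, in order,
    // so an existing cluster can be reconfigured without a full kubeadm init.
    func rerunControlPlanePhases(config string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append(append([]string{"init", "phase"}, p...), "--config", config)
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(rerunControlPlanePhases("/var/tmp/minikube/kubeadm.yaml"))
    }
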
	I0601 11:38:14.073009   24809 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:38:14.073089   24809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:14.584536   24809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:15.084689   24809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:15.584679   24809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:15.596637   24809 api_server.go:71] duration metric: took 1.523581267s to wait for apiserver process to appear ...
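
The four pgrep runs above are a poll: pgrep exits non-zero until a kube-apiserver process matches, and the probe repeats roughly every 500ms until it succeeds. A small Go sketch of the same wait (waitForProcess is a hypothetical local-only helper; the log's probes run over SSH with sudo):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess retries a pgrep probe until a matching process appears
    // or the timeout elapses; pgrep's non-zero exit surfaces here as an error.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", time.Minute))
    }
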
	I0601 11:38:15.596656   24809 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:38:15.596668   24809 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53441/healthz ...
	I0601 11:38:17.012240   24862 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0601 11:38:17.012338   24862 config.go:178] Loaded profile config "cert-expiration-20220601113122-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:38:17.075003   24862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:38:17.075013   24862 addons.go:65] Setting storage-provisioner=true in profile "cert-expiration-20220601113122-16804"
	I0601 11:38:17.075011   24862 addons.go:65] Setting default-storageclass=true in profile "cert-expiration-20220601113122-16804"
	I0601 11:38:17.075035   24862 addons.go:153] Setting addon storage-provisioner=true in "cert-expiration-20220601113122-16804"
	W0601 11:38:17.075041   24862 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:38:17.075065   24862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-20220601113122-16804"
	I0601 11:38:17.075077   24862 host.go:66] Checking if "cert-expiration-20220601113122-16804" exists ...
	I0601 11:38:17.075357   24862 cli_runner.go:164] Run: docker container inspect cert-expiration-20220601113122-16804 --format={{.State.Status}}
	I0601 11:38:17.075401   24862 cli_runner.go:164] Run: docker container inspect cert-expiration-20220601113122-16804 --format={{.State.Status}}
	I0601 11:38:17.128011   24862 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 11:38:17.128057   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:17.180222   24862 addons.go:153] Setting addon default-storageclass=true in "cert-expiration-20220601113122-16804"
	I0601 11:38:17.193739   24862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0601 11:38:17.193752   24862 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:38:17.215032   24862 host.go:66] Checking if "cert-expiration-20220601113122-16804" exists ...
	I0601 11:38:17.215092   24862 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:38:17.215098   24862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:38:17.215158   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:17.216355   24862 cli_runner.go:164] Run: docker container inspect cert-expiration-20220601113122-16804 --format={{.State.Status}}
	I0601 11:38:17.237885   24862 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:38:17.237996   24862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:17.251683   24862 api_server.go:71] duration metric: took 239.497968ms to wait for apiserver process to appear ...
	I0601 11:38:17.251705   24862 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:38:17.251716   24862 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53203/healthz ...
	I0601 11:38:17.260665   24862 api_server.go:266] https://127.0.0.1:53203/healthz returned 200:
	ok
	I0601 11:38:17.262097   24862 api_server.go:140] control plane version: v1.23.6
	I0601 11:38:17.262103   24862 api_server.go:130] duration metric: took 10.393784ms to wait for apiserver health ...
	I0601 11:38:17.262110   24862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:38:17.268190   24862 system_pods.go:59] 7 kube-system pods found
	I0601 11:38:17.268200   24862 system_pods.go:61] "coredns-64897985d-tl42h" [1d693973-0097-4ef5-ad2a-0fe8e296ffec] Running
	I0601 11:38:17.268203   24862 system_pods.go:61] "etcd-cert-expiration-20220601113122-16804" [3cbf8b6b-a39c-4f72-9a08-20521c71a732] Running
	I0601 11:38:17.268206   24862 system_pods.go:61] "kube-apiserver-cert-expiration-20220601113122-16804" [6b81040e-47ca-40df-b355-9cdf2245a221] Running
	I0601 11:38:17.268208   24862 system_pods.go:61] "kube-controller-manager-cert-expiration-20220601113122-16804" [8f2567de-8abc-4c47-a190-57c8ddcdbe35] Running
	I0601 11:38:17.268211   24862 system_pods.go:61] "kube-proxy-w77gp" [5b4e9ad1-64bc-4ccc-8be7-05b29814526d] Running
	I0601 11:38:17.268213   24862 system_pods.go:61] "kube-scheduler-cert-expiration-20220601113122-16804" [6535e49d-3806-4ffe-90fc-825f957366a6] Running
	I0601 11:38:17.268215   24862 system_pods.go:61] "storage-provisioner" [15b29122-7e21-4aac-b6db-52fa4c6d3edb] Running
	I0601 11:38:17.268218   24862 system_pods.go:74] duration metric: took 6.105418ms to wait for pod list to return data ...
	I0601 11:38:17.268223   24862 kubeadm.go:572] duration metric: took 256.043349ms to wait for : map[apiserver:true system_pods:true] ...
	I0601 11:38:17.268233   24862 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:38:17.272393   24862 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 11:38:17.272406   24862 node_conditions.go:123] node cpu capacity is 6
	I0601 11:38:17.272418   24862 node_conditions.go:105] duration metric: took 4.182365ms to run NodePressure ...
	I0601 11:38:17.272424   24862 start.go:213] waiting for startup goroutines ...
	I0601 11:38:17.311056   24862 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:38:17.311063   24862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:38:17.311112   24862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220601113122-16804
	I0601 11:38:17.314121   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:17.391304   24862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53204 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/cert-expiration-20220601113122-16804/id_rsa Username:docker}
	I0601 11:38:17.419622   24862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:38:17.494487   24862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:38:17.670019   24862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:38:17.689668   24862 addons.go:417] enableAddons completed in 677.429004ms
	I0601 11:38:17.720662   24862 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 11:38:17.741657   24862 out.go:177] * Done! kubectl is now configured to use "cert-expiration-20220601113122-16804" cluster and "default" namespace by default
	I0601 11:38:18.155763   24809 api_server.go:266] https://127.0.0.1:53441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:38:18.155797   24809 api_server.go:102] status: https://127.0.0.1:53441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:38:18.656077   24809 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53441/healthz ...
	I0601 11:38:18.661458   24809 api_server.go:266] https://127.0.0.1:53441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:38:18.661470   24809 api_server.go:102] status: https://127.0.0.1:53441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:38:19.155916   24809 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53441/healthz ...
	I0601 11:38:19.163921   24809 api_server.go:266] https://127.0.0.1:53441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:38:19.163940   24809 api_server.go:102] status: https://127.0.0.1:53441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:38:19.656313   24809 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53441/healthz ...
	I0601 11:38:19.661419   24809 api_server.go:266] https://127.0.0.1:53441/healthz returned 200:
	ok
	I0601 11:38:19.667500   24809 api_server.go:140] control plane version: v1.23.6
	I0601 11:38:19.667510   24809 api_server.go:130] duration metric: took 4.070734221s to wait for apiserver health ...
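
The healthz retries above show the apiserver coming up in stages: first a 403, because anonymous access to /healthz is only allowed once the rbac/bootstrap-roles post-start hook has installed the system:public-info-viewer role binding; then 500s while the hooks marked [-] finish; finally 200 "ok". A minimal Go sketch of such a poll loop (waitForHealthz is hypothetical; certificate verification is skipped because the endpoint is reached via the forwarded 127.0.0.1 port rather than a name in the cert's SANs):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // waitForHealthz polls /healthz until it returns 200 "ok" or the
    // timeout elapses, tolerating the transient 403/500 responses above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://127.0.0.1:53441/healthz", time.Minute))
    }
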
	I0601 11:38:19.667518   24809 cni.go:95] Creating CNI manager for ""
	I0601 11:38:19.667522   24809 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:38:19.667530   24809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:38:19.672148   24809 system_pods.go:59] 5 kube-system pods found
	I0601 11:38:19.672163   24809 system_pods.go:61] "etcd-kubernetes-upgrade-20220601113329-16804" [90890dff-8a32-4764-9d0f-c54fd8090f36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:38:19.672170   24809 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220601113329-16804" [0650ac6d-ec25-4ffb-bec1-568754ccad63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:38:19.672177   24809 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220601113329-16804" [8405e5d2-0128-4852-a468-ee6b68c038ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:38:19.672182   24809 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220601113329-16804" [c94b4cda-7727-4348-945c-171235a460a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:38:19.672188   24809 system_pods.go:61] "storage-provisioner" [5968f455-dcc1-47a2-bc05-401a8cfd6978] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:38:19.672194   24809 system_pods.go:74] duration metric: took 4.658816ms to wait for pod list to return data ...
	I0601 11:38:19.672200   24809 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:38:19.674635   24809 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 11:38:19.674649   24809 node_conditions.go:123] node cpu capacity is 6
	I0601 11:38:19.674660   24809 node_conditions.go:105] duration metric: took 2.45644ms to run NodePressure ...
	I0601 11:38:19.674671   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:38:19.802391   24809 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:38:19.811192   24809 ops.go:34] apiserver oom_adj: -16
	I0601 11:38:19.811206   24809 kubeadm.go:630] restartCluster took 8.422611677s
	I0601 11:38:19.811215   24809 kubeadm.go:397] StartCluster complete in 8.460308574s
	I0601 11:38:19.811232   24809 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:19.811312   24809 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:38:19.812029   24809 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:38:19.812735   24809 kapi.go:59] client config for kubernetes-upgrade-20220601113329-16804: &rest.Config{Host:"https://127.0.0.1:53441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 11:38:19.815788   24809 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220601113329-16804" rescaled to 1
	I0601 11:38:19.815826   24809 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:38:19.815833   24809 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:38:19.815872   24809 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0601 11:38:19.840919   24809 out.go:177] * Verifying Kubernetes components...
	I0601 11:38:19.816019   24809 config.go:178] Loaded profile config "kubernetes-upgrade-20220601113329-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:38:19.841004   24809 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220601113329-16804"
	I0601 11:38:19.841016   24809 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220601113329-16804"
	I0601 11:38:19.870829   24809 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 11:38:19.897784   24809 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220601113329-16804"
	I0601 11:38:19.897793   24809 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220601113329-16804"
	I0601 11:38:19.897814   24809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 11:38:19.897815   24809 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:38:19.897916   24809 host.go:66] Checking if "kubernetes-upgrade-20220601113329-16804" exists ...
	I0601 11:38:19.898320   24809 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Status}}
	I0601 11:38:19.898511   24809 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Status}}
	I0601 11:38:19.913093   24809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:38:19.986630   24809 kapi.go:59] client config for kubernetes-upgrade-20220601113329-16804: &rest.Config{Host:"https://127.0.0.1:53441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubernetes-upgrade-20220601113329-16804/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22d2020), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 11:38:19.994611   24809 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220601113329-16804"
	I0601 11:38:20.009665   24809 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0601 11:38:20.009685   24809 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:38:20.030201   24809 host.go:66] Checking if "kubernetes-upgrade-20220601113329-16804" exists ...
	I0601 11:38:20.030242   24809 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:38:20.030250   24809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:38:20.030316   24809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:38:20.031642   24809 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220601113329-16804 --format={{.State.Status}}
	I0601 11:38:20.037706   24809 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:38:20.037901   24809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:38:20.050521   24809 api_server.go:71] duration metric: took 234.668746ms to wait for apiserver process to appear ...
	I0601 11:38:20.050579   24809 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:38:20.050593   24809 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:53441/healthz ...
	I0601 11:38:20.058358   24809 api_server.go:266] https://127.0.0.1:53441/healthz returned 200:
	ok
	I0601 11:38:20.060421   24809 api_server.go:140] control plane version: v1.23.6
	I0601 11:38:20.060434   24809 api_server.go:130] duration metric: took 9.848213ms to wait for apiserver health ...
	I0601 11:38:20.060440   24809 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:38:20.068106   24809 system_pods.go:59] 5 kube-system pods found
	I0601 11:38:20.068127   24809 system_pods.go:61] "etcd-kubernetes-upgrade-20220601113329-16804" [90890dff-8a32-4764-9d0f-c54fd8090f36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 11:38:20.068135   24809 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220601113329-16804" [0650ac6d-ec25-4ffb-bec1-568754ccad63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 11:38:20.068141   24809 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220601113329-16804" [8405e5d2-0128-4852-a468-ee6b68c038ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 11:38:20.068149   24809 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220601113329-16804" [c94b4cda-7727-4348-945c-171235a460a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:38:20.068159   24809 system_pods.go:61] "storage-provisioner" [5968f455-dcc1-47a2-bc05-401a8cfd6978] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0601 11:38:20.068165   24809 system_pods.go:74] duration metric: took 7.721338ms to wait for pod list to return data ...
	I0601 11:38:20.068171   24809 kubeadm.go:572] duration metric: took 252.32433ms to wait for : map[apiserver:true system_pods:true] ...
	I0601 11:38:20.068180   24809 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:38:20.071515   24809 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 11:38:20.071532   24809 node_conditions.go:123] node cpu capacity is 6
	I0601 11:38:20.071554   24809 node_conditions.go:105] duration metric: took 3.369198ms to run NodePressure ...
	I0601 11:38:20.071566   24809 start.go:213] waiting for startup goroutines ...
	I0601 11:38:20.115067   24809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53437 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:38:20.118727   24809 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:38:20.118742   24809 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:38:20.118859   24809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220601113329-16804
	I0601 11:38:20.196554   24809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53437 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/kubernetes-upgrade-20220601113329-16804/id_rsa Username:docker}
	I0601 11:38:20.211448   24809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:38:20.293859   24809 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:38:20.870476   24809 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0601 11:38:20.891855   24809 addons.go:417] enableAddons completed in 1.07594014s
	I0601 11:38:20.922459   24809 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 11:38:20.960727   24809 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220601113329-16804" cluster and "default" namespace by default
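The "minor skew: 1" note above is informational rather than an error: kubectl supports one minor version of skew in either direction against the apiserver, so a v1.24.0 kubectl driving a v1.23.6 cluster is within policy. A quick manual re-check of both versions, assuming the profile from this run were still present, might look like:

	# host kubectl vs. the kubectl minikube matches to this cluster (profile name taken from this run)
	kubectl version
	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 kubectl -- version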
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:37:44 UTC, end at Wed 2022-06-01 18:38:22 UTC. --
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.264874505Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.264908334Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.264934476Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.264943996Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.265792835Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.265824542Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.265843337Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.265849462Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.540095285Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.548734666Z" level=info msg="Loading containers: start."
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.621877705Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.651835099Z" level=info msg="Loading containers: done."
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.661573415Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.661638850Z" level=info msg="Daemon has completed initialization"
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 systemd[1]: Started Docker Application Container Engine.
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.685363440Z" level=info msg="API listen on [::]:2376"
	Jun 01 18:37:56 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:37:56.687832066Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 18:38:11 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:11.658168809Z" level=info msg="ignoring event" container=ff070ae07e9a651851043435e98e2587acfee91f2af99e3d8a9c02b5fbfa828e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:11 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:11.658199805Z" level=info msg="ignoring event" container=139c0c94fcdff511071390fadbbdaf7047f32544e6ef43394648edddc71bf88c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:11 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:11.662510759Z" level=info msg="ignoring event" container=c600f7b9b748c6343f6e68ab1e354432be8dc475b79a9c59da249bf557da9285 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:11 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:11.663794596Z" level=info msg="ignoring event" container=e9a27477dd700530009f39e329debdf7f6f29875bffe6ef98fb67a278d59bf24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:11 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:11.670304316Z" level=info msg="ignoring event" container=eff1a7ba385f706d6f915138d03c16294f981168c45416a03b889b34b7d2500c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:11 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:11.675549118Z" level=info msg="ignoring event" container=606f0400de993caff6e3c4e71984545174f277ad7c25f31ac4d1205f6c3a2cd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:12 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:12.666718643Z" level=info msg="ignoring event" container=d1577f830c3df1ed15310666ab354c491250642acacf6c1fcf8db087114bd3e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:38:12 kubernetes-upgrade-20220601113329-16804 dockerd[525]: time="2022-06-01T18:38:12.683332062Z" level=info msg="ignoring event" container=17cee5359bff49d8a5e22daedb6655b67b92d9df1969b2c65b1a84dfd0fb22f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
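The dockerd entries above are journald output captured from inside the kic node; the burst of "ignoring event ... TaskDelete" messages at 18:38:11-12 is the pre-upgrade control-plane containers being torn down. An equivalent capture, assuming the node were still running, could presumably be taken with:

	# pull the Docker unit log straight from the node (command form assumed)
	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 ssh "sudo journalctl -u docker --no-pager"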
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	69b72ee34558e       595f327f224a4       8 seconds ago       Running             kube-scheduler            1                   1a9421d2bec9f
	2030deea94891       8fa62c12256df       8 seconds ago       Running             kube-apiserver            1                   a93bcd3abab6a
	219fa9989e041       df7b72818ad2e       8 seconds ago       Running             kube-controller-manager   1                   169b7d7fa54e7
	2feee9c6acf22       25f8c7f3da61c       8 seconds ago       Running             etcd                      1                   7735dd883e072
	c600f7b9b748c       25f8c7f3da61c       23 seconds ago      Exited              etcd                      0                   ff070ae07e9a6
	606f0400de993       df7b72818ad2e       23 seconds ago      Exited              kube-controller-manager   0                   e9a27477dd700
	17cee5359bff4       8fa62c12256df       23 seconds ago      Exited              kube-apiserver            0                   139c0c94fcdff
	d1577f830c3df       595f327f224a4       23 seconds ago      Exited              kube-scheduler            0                   eff1a7ba385f7
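The table lists both generations of control-plane containers: the Running entries with ATTEMPT 1 are the post-upgrade restarts, and the Exited ATTEMPT 0 entries are their pre-restart counterparts, matching the TaskDelete events in the Docker log above. The columns follow crictl's listing format, so a sketch of reproducing it by hand (command form assumed) would be:

	# list all runtime containers on the node, including exited ones
	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 ssh "sudo crictl ps -a"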
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220601113329-16804
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220601113329-16804
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 18:38:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220601113329-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 18:38:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 18:38:18 +0000   Wed, 01 Jun 2022 18:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 18:38:18 +0000   Wed, 01 Jun 2022 18:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 18:38:18 +0000   Wed, 01 Jun 2022 18:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 18:38:18 +0000   Wed, 01 Jun 2022 18:38:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    kubernetes-upgrade-20220601113329-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                6d3c90e0-76a5-4c92-a99a-428b7ad2f075
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220601113329-16804                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         18s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220601113329-16804             250m (4%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220601113329-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220601113329-16804             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 24s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet  Node kubernetes-upgrade-20220601113329-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet  Node kubernetes-upgrade-20220601113329-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet  Node kubernetes-upgrade-20220601113329-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet  Updated Node Allocatable limit across pods
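Two details here explain the Pending storage-provisioner pod reported earlier: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint, and its Ready condition only flipped to True at 18:38:18, seconds before this snapshot. The control plane removes that taint on its own once the node stays Ready; a way to confirm it is gone (context name from this run) might be:

	# print any taints still on the upgraded node (empty output once not-ready is cleared)
	kubectl --context kubernetes-upgrade-20220601113329-16804 \
	  get node kubernetes-upgrade-20220601113329-16804 -o jsonpath='{.spec.taints}'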
	
	* 
	* ==> dmesg <==
	* [  +0.001517] FS-Cache: O-key=[8] 'dadf850300000000'
	[  +0.001304] FS-Cache: N-cookie c=000000009a04840d [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.001865] FS-Cache: N-cookie d=00000000d2bd0c1d n=00000000fbe96da6
	[  +0.001445] FS-Cache: N-key=[8] 'dadf850300000000'
	[  +0.001975] FS-Cache: Duplicate cookie detected
	[  +0.001109] FS-Cache: O-cookie c=000000004f2e48df [p=00000000c3c39397 fl=226 nc=0 na=1]
	[  +0.001769] FS-Cache: O-cookie d=00000000d2bd0c1d n=0000000018717240
	[  +0.001448] FS-Cache: O-key=[8] 'dadf850300000000'
	[  +0.001135] FS-Cache: N-cookie c=000000009a04840d [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.002013] FS-Cache: N-cookie d=00000000d2bd0c1d n=00000000571aa914
	[  +0.001438] FS-Cache: N-key=[8] 'dadf850300000000'
	[  +3.424117] FS-Cache: Duplicate cookie detected
	[  +0.001104] FS-Cache: O-cookie c=0000000002754839 [p=00000000c3c39397 fl=226 nc=0 na=1]
	[  +0.001774] FS-Cache: O-cookie d=00000000d2bd0c1d n=0000000062f4d47d
	[  +0.001610] FS-Cache: O-key=[8] 'd9df850300000000'
	[  +0.001207] FS-Cache: N-cookie c=00000000143faa29 [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.001813] FS-Cache: N-cookie d=00000000d2bd0c1d n=000000004f0ade46
	[  +0.001508] FS-Cache: N-key=[8] 'd9df850300000000'
	[  +0.463678] FS-Cache: Duplicate cookie detected
	[  +0.001235] FS-Cache: O-cookie c=00000000f58304b4 [p=00000000c3c39397 fl=226 nc=0 na=1]
	[  +0.001831] FS-Cache: O-cookie d=00000000d2bd0c1d n=000000005c1310b5
	[  +0.001510] FS-Cache: O-key=[8] 'e4df850300000000'
	[  +0.001109] FS-Cache: N-cookie c=0000000057f6365e [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.002017] FS-Cache: N-cookie d=00000000d2bd0c1d n=00000000d6cb9dcd
	[  +0.001494] FS-Cache: N-key=[8] 'e4df850300000000'
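The FS-Cache "Duplicate cookie detected" lines are kernel noise that shows up regularly on Docker Desktop's linuxkit kernels and is generally considered harmless; nothing in this ring buffer points at the test failure. To check whether anything other than FS-Cache chatter is present, one could, for example, run:

	# show the node's kernel ring buffer minus the FS-Cache noise (command form assumed)
	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 ssh "dmesg | grep -v FS-Cache"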
	
	* 
	* ==> etcd [2feee9c6acf2] <==
	* {"level":"info","ts":"2022-06-01T18:38:15.004Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-01T18:38:15.005Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-01T18:38:15.006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-01T18:38:15.006Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-01T18:38:15.006Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:15.006Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:15.007Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T18:38:15.007Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T18:38:15.007Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T18:38:15.007Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T18:38:15.007Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-01T18:38:16.495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-01T18:38:16.496Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:kubernetes-upgrade-20220601113329-16804 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T18:38:16.496Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T18:38:16.497Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T18:38:16.497Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T18:38:16.498Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T18:38:16.498Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T18:38:16.498Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> etcd [c600f7b9b748] <==
	* {"level":"info","ts":"2022-06-01T18:38:00.413Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T18:38:00.414Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T18:38:00.413Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:00.414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:00.414Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:00.414Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:00.415Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T18:38:00.419Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-01T18:38:06.497Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.631048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T18:38:06.497Z","caller":"traceutil/trace.go:171","msg":"trace[1757516173] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-controller; range_end:; response_count:0; response_revision:325; }","duration":"108.92052ms","start":"2022-06-01T18:38:06.388Z","end":"2022-06-01T18:38:06.497Z","steps":["trace[1757516173] 'agreement among raft nodes before linearized reading'  (duration: 33.210145ms)","trace[1757516173] 'range keys from in-memory index tree'  (duration: 75.40326ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:38:06.740Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.271156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:353"}
	{"level":"info","ts":"2022-06-01T18:38:06.740Z","caller":"traceutil/trace.go:171","msg":"trace[1726807805] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:328; }","duration":"145.36893ms","start":"2022-06-01T18:38:06.595Z","end":"2022-06-01T18:38:06.740Z","steps":["trace[1726807805] 'agreement among raft nodes before linearized reading'  (duration: 33.068698ms)","trace[1726807805] 'range keys from in-memory index tree'  (duration: 112.166111ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:38:06.741Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.246821ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013404342122297 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/service-controller\" mod_revision:326 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/service-controller\" value_size:172 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/service-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T18:38:06.741Z","caller":"traceutil/trace.go:171","msg":"trace[274753316] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"145.783252ms","start":"2022-06-01T18:38:06.595Z","end":"2022-06-01T18:38:06.741Z","steps":["trace[274753316] 'process raft request'  (duration: 32.809612ms)","trace[274753316] 'compare'  (duration: 112.014914ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:38:06.953Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.268473ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013404342122305 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" value_size:123 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T18:38:06.954Z","caller":"traceutil/trace.go:171","msg":"trace[448716459] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"113.891565ms","start":"2022-06-01T18:38:06.840Z","end":"2022-06-01T18:38:06.954Z","steps":["trace[448716459] 'compare'  (duration: 113.127186ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:38:06.954Z","caller":"traceutil/trace.go:171","msg":"trace[1662764803] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"113.957783ms","start":"2022-06-01T18:38:06.840Z","end":"2022-06-01T18:38:06.954Z","steps":["trace[1662764803] 'process raft request'  (duration: 113.617024ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:38:11.595Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T18:38:11.595Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"kubernetes-upgrade-20220601113329-16804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/01 18:38:11 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 18:38:11 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T18:38:11.603Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-01T18:38:11.605Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T18:38:11.606Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-01T18:38:11.606Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"kubernetes-upgrade-20220601113329-16804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:38:23 up 41 min,  0 users,  load average: 1.42, 1.03, 1.00
	Linux kubernetes-upgrade-20220601113329-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [17cee5359bff] <==
	* W0601 18:38:12.599067       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599092       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599113       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599118       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599136       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599142       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599143       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599142       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599245       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599262       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599274       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599409       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599444       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599444       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599461       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599526       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599425       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599560       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599586       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599586       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599589       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599564       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599567       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599620       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 18:38:12.599623       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
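The wall of identical grpc reconnect warnings is the pre-upgrade apiserver (17cee5359bff) losing its etcd backend in the instant between etcd's shutdown and its own: expected churn during the restart, not a separate failure. Its full output can still be recovered from the runtime after it exits, e.g.:

	# dump the exited apiserver's log by container id (from the status table above)
	out/minikube-darwin-amd64 -p kubernetes-upgrade-20220601113329-16804 ssh "sudo docker logs 17cee5359bff"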
	
	* 
	* ==> kube-apiserver [2030deea9489] <==
	* I0601 18:38:18.154721       1 naming_controller.go:291] Starting NamingConditionController
	I0601 18:38:18.154735       1 establishing_controller.go:76] Starting EstablishingController
	I0601 18:38:18.154751       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0601 18:38:18.154776       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0601 18:38:18.154789       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0601 18:38:18.168901       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0601 18:38:18.170093       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0601 18:38:18.180571       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0601 18:38:18.238411       1 cache.go:39] Caches are synced for autoregister controller
	I0601 18:38:18.239845       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 18:38:18.240094       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 18:38:18.240137       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 18:38:18.243755       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 18:38:18.243888       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 18:38:18.248983       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0601 18:38:18.249537       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 18:38:18.358494       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 18:38:19.136266       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 18:38:19.136336       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 18:38:19.141617       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 18:38:19.759156       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 18:38:19.766791       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 18:38:19.788764       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 18:38:19.801477       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 18:38:19.806792       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [219fa9989e04] <==
	* I0601 18:38:16.293301       1 serving.go:348] Generated self-signed cert in-memory
	I0601 18:38:16.653527       1 controllermanager.go:196] Version: v1.23.6
	I0601 18:38:16.654667       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
	I0601 18:38:16.654758       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 18:38:16.654809       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0601 18:38:16.654984       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0601 18:38:20.355789       1 shared_informer.go:240] Waiting for caches to sync for tokens
	I0601 18:38:20.364185       1 controllermanager.go:605] Started "endpointslicemirroring"
	I0601 18:38:20.364357       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I0601 18:38:20.364365       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I0601 18:38:20.369476       1 controllermanager.go:605] Started "serviceaccount"
	I0601 18:38:20.369620       1 serviceaccounts_controller.go:117] Starting service account controller
	I0601 18:38:20.369627       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0601 18:38:20.374515       1 controllermanager.go:605] Started "cronjob"
	I0601 18:38:20.374651       1 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
	I0601 18:38:20.374662       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0601 18:38:20.377195       1 node_ipam_controller.go:91] Sending events to api server.
	I0601 18:38:20.456546       1 shared_informer.go:247] Caches are synced for tokens 
	
	* 
	* ==> kube-controller-manager [606f0400de99] <==
	* I0601 18:38:05.754389       1 controllermanager.go:605] Started "serviceaccount"
	I0601 18:38:05.754446       1 serviceaccounts_controller.go:117] Starting service account controller
	I0601 18:38:05.754452       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0601 18:38:05.889161       1 controllermanager.go:605] Started "replicaset"
	I0601 18:38:05.889194       1 replica_set.go:186] Starting replicaset controller
	I0601 18:38:05.889201       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0601 18:38:06.098392       1 controllermanager.go:605] Started "ephemeral-volume"
	I0601 18:38:06.098550       1 controller.go:170] Starting ephemeral volume controller
	I0601 18:38:06.098557       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0601 18:38:06.188649       1 controllermanager.go:605] Started "endpointslice"
	I0601 18:38:06.188710       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0601 18:38:06.188717       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0601 18:38:06.338208       1 controllermanager.go:605] Started "ttl"
	I0601 18:38:06.338257       1 ttl_controller.go:121] Starting TTL controller
	I0601 18:38:06.338263       1 shared_informer.go:240] Waiting for caches to sync for TTL
	E0601 18:38:06.503480       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0601 18:38:06.503512       1 controllermanager.go:583] Skipping "service"
	W0601 18:38:06.503520       1 core.go:226] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	W0601 18:38:06.503523       1 controllermanager.go:583] Skipping "route"
	I0601 18:38:06.800659       1 controllermanager.go:605] Started "job"
	I0601 18:38:06.800874       1 job_controller.go:184] Starting job controller
	I0601 18:38:06.800911       1 shared_informer.go:240] Waiting for caches to sync for job
	I0601 18:38:06.955853       1 controllermanager.go:605] Started "bootstrapsigner"
	I0601 18:38:06.955966       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0601 18:38:07.029898       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [69b72ee34558] <==
	* I0601 18:38:16.112582       1 serving.go:348] Generated self-signed cert in-memory
	W0601 18:38:18.177553       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 18:38:18.177590       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 18:38:18.177598       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 18:38:18.177602       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 18:38:18.253635       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 18:38:18.255634       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 18:38:18.255914       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 18:38:18.256314       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 18:38:18.256151       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 18:38:18.356844       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [d1577f830c3d] <==
	* W0601 18:38:02.150143       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 18:38:02.150366       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 18:38:02.150420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 18:38:02.150375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 18:38:02.150285       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 18:38:02.150440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 18:38:02.150249       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 18:38:02.150448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 18:38:02.150604       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 18:38:02.150662       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 18:38:02.150692       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 18:38:02.150671       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 18:38:02.150835       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 18:38:02.150875       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 18:38:02.150847       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 18:38:02.151351       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 18:38:02.151503       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 18:38:02.151549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 18:38:03.199675       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 18:38:03.199723       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 18:38:03.251094       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 18:38:03.789896       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 18:38:11.602680       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 18:38:11.602861       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0601 18:38:11.602938       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
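The scheduler pair mirrors etcd: the old instance (d1577f830c3d) logs list/watch denials that look like startup racing ahead of RBAC bootstrap (they stop once its caches sync at 18:38:03), then shuts down cleanly at 18:38:11, while the new instance (69b72ee34558) comes up with only the usual extension-apiserver-authentication warning. That warning's own suggested remedy, kept verbatim with its placeholder names, would be:

	# ROLEBINDING_NAME / YOUR_NS / YOUR_SA are placeholders, exactly as in the scheduler's hint above
	kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA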
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:37:44 UTC, end at Wed 2022-06-01 18:38:24 UTC. --
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.072097    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.173390    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.274636    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.374784    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.477673    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.578276    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.678824    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.779549    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.880361    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:16 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:16.981157    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.082078    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.183108    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.283568    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.384337    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.485541    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.586363    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.687230    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.788160    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.889059    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:17 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:17.989403    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:18 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: E0601 18:38:18.090035    2528 kubelet.go:2461] "Error getting node" err="node \"kubernetes-upgrade-20220601113329-16804\" not found"
	Jun 01 18:38:18 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: I0601 18:38:18.274848    2528 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220601113329-16804"
	Jun 01 18:38:18 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: I0601 18:38:18.274947    2528 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220601113329-16804"
	Jun 01 18:38:19 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: I0601 18:38:19.150015    2528 apiserver.go:52] "Watching apiserver"
	Jun 01 18:38:19 kubernetes-upgrade-20220601113329-16804 kubelet[2528]: I0601 18:38:19.202054    2528 reconciler.go:157] "Reconciler: start to sync state"
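The kubelet loop resolves itself: "Error getting node ... not found" repeats roughly every 100ms only until the restarted apiserver finishes syncing its caches at 18:38:18, at which point the node re-registers and the reconciler starts. One way to watch that window from the host (context name from this run) would be:

	# watch the node object reappear and go Ready during the upgrade
	kubectl --context kubernetes-upgrade-20220601113329-16804 get nodes -w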
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220601113329-16804 -n kubernetes-upgrade-20220601113329-16804
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601113329-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220601113329-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.615595684s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601113329-16804 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220601113329-16804 describe pod storage-provisioner: exit status 1 (46.190511ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220601113329-16804 describe pod storage-provisioner: exit status 1
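The NotFound above is most likely a namespace mismatch rather than a missing pod: the describe was issued without -n kube-system, so it looked for storage-provisioner in the default namespace. Scoping the same probe to the namespace the pod actually lives in should work:

	# same post-mortem probe, namespace-qualified
	kubectl --context kubernetes-upgrade-20220601113329-16804 -n kube-system describe pod storage-provisioner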
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220601113329-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220601113329-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220601113329-16804: (3.334571403s)
--- FAIL: TestKubernetesUpgrade (301.40s)

TestMissingContainerUpgrade (46.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.50552863.exe start -p missing-upgrade-20220601113242-16804 --memory=2200 --driver=docker 
E0601 11:33:14.649835   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.50552863.exe start -p missing-upgrade-20220601113242-16804 --memory=2200 --driver=docker : exit status 78 (31.796861458s)

-- stdout --
	* [missing-upgrade-20220601113242-16804] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220601113242-16804
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220601113242-16804" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:32:56.113681937 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220601113242-16804" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:33:12.596933972 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
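
The unified diff embedded in this error shows what the legacy v1.9.1 provisioner wrote over /lib/systemd/system/docker.service. It relies on systemd's ExecStart reset rule, spelled out in the file's own comment: for a non-oneshot service, an empty ExecStart= line must first clear the inherited command, or systemd rejects the unit ("Service has more than one ExecStart= setting"). Note also that the generated ExecReload line reads "/bin/kill -s HUP " with nothing after HUP; the $MAINPID argument appears to have been expanded away when the template was rendered, which is consistent with docker.service then refusing to start. A minimal sketch of the same reset pattern written as a drop-in instead of a full overwrite (hypothetical path, flags shortened):

	# /etc/systemd/system/docker.service.d/10-override.conf (hypothetical)
	[Service]
	# Clear the directives inherited from the base unit before redefining them;
	# systemd refuses a Type=notify service with two effective ExecStart= settings.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	ExecReload=
	# $MAINPID must reach systemd literally; whatever renders this file must not
	# expand the variable itself.
	ExecReload=/bin/kill -s HUP $MAINPID

Applying it takes the same sequence the provisioner runs above: sudo systemctl daemon-reload && sudo systemctl restart docker.
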
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.50552863.exe start -p missing-upgrade-20220601113242-16804 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.50552863.exe start -p missing-upgrade-20220601113242-16804 --memory=2200 --driver=docker : exit status 70 (4.127675025s)

-- stdout --
	* [missing-upgrade-20220601113242-16804] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220601113242-16804
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220601113242-16804" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.50552863.exe start -p missing-upgrade-20220601113242-16804 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.50552863.exe start -p missing-upgrade-20220601113242-16804 --memory=2200 --driver=docker : exit status 70 (4.214954409s)

-- stdout --
	* [missing-upgrade-20220601113242-16804] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220601113242-16804
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220601113242-16804" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-06-01 11:33:26.120659 -0700 PDT m=+2168.062038512
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220601113242-16804
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220601113242-16804:

-- stdout --
	[
	    {
	        "Id": "b6986e13a9113a48a1b2b0a0f713e45f95e858c09d3019baec7dbb6acd5c84b4",
	        "Created": "2022-06-01T18:33:04.34041186Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 124061,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:33:04.564685229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/b6986e13a9113a48a1b2b0a0f713e45f95e858c09d3019baec7dbb6acd5c84b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b6986e13a9113a48a1b2b0a0f713e45f95e858c09d3019baec7dbb6acd5c84b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/b6986e13a9113a48a1b2b0a0f713e45f95e858c09d3019baec7dbb6acd5c84b4/hosts",
	        "LogPath": "/var/lib/docker/containers/b6986e13a9113a48a1b2b0a0f713e45f95e858c09d3019baec7dbb6acd5c84b4/b6986e13a9113a48a1b2b0a0f713e45f95e858c09d3019baec7dbb6acd5c84b4-json.log",
	        "Name": "/missing-upgrade-20220601113242-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220601113242-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/849eddd68d643a82047835a3b5f3d451ebcc19bf0e453019e0530f1459cfeb35-init/diff:/var/lib/docker/overlay2/5a5021b04d40486c3f899d3d86469c69d0a0a3a6aedb4a262808e8e0e3212dd9/diff:/var/lib/docker/overlay2/34d2fad93be8a8b08db19932b165d6e4ee12c642f5b9a71ae0da16e41e895455/diff:/var/lib/docker/overlay2/a519d8b71fe163aad87235d12fd7596db7d55f7f2c546ea938ac5b44f16b652f/diff:/var/lib/docker/overlay2/2f15e48f7fd9f51c0246edf680b5bf5101d756e18f610fe615defe179c7ff534/diff:/var/lib/docker/overlay2/b3950a464734420ac98826fd7846d239d550db1d1ae773f32fd285af845cdf22/diff:/var/lib/docker/overlay2/8988ddfdbc34033c8f6dfbda80a939b635699c7799196fc6e1c67870aa3a98fe/diff:/var/lib/docker/overlay2/7ba0245eca92a262dcf5985ae53e44b4246b2148cf3041b19299c4824436c857/diff:/var/lib/docker/overlay2/6c8ceadb783c54050c9822b7a9c7e32f5c8c95922ec59c1027de2484daecd2b4/diff:/var/lib/docker/overlay2/35b8de062c6e2440d11c06c0221db2bc4763da7dcc75f1ff234a1a6620f908c0/diff:/var/lib/docker/overlay2/3584c2
bd1bdbc4f33ae8409b002bb9449ef69f5eac5efaf3029bafd8e59e616d/diff:/var/lib/docker/overlay2/89f35c1cfd5f4b4711c8faf3c75a939b4b42ad8280d52e46ed9174898ebd4dea/diff:/var/lib/docker/overlay2/ba52e45aa55684244ce68ffb6f37275e672a920729ea5be00e4cc02625a11336/diff:/var/lib/docker/overlay2/88f06922766e6932db8f1d9662f093b42c354676160da5d7d627df01138940d2/diff:/var/lib/docker/overlay2/e30f8690cf13147aeb6cc0f6af6a5cc429942a49d65fc69df4976e32002b2c9c/diff:/var/lib/docker/overlay2/a013d03dab2547e58c77f48109fc20ac70497dba6843d25ae3705c054244401e/diff:/var/lib/docker/overlay2/cdb70bf8140c088f0dea40152c2a2ce37a40912c2a58e90e93f143d49795084f/diff:/var/lib/docker/overlay2/65b836a39622281946b823eb252606e8e09382a0f51a3fd2000a31247d55db47/diff:/var/lib/docker/overlay2/ba32c157bb001a6bdee2dd25782f9072b8f2c1f17dd60711c5dc96767ca3633e/diff:/var/lib/docker/overlay2/ebafcf8827f052a7339d84dae13db8562e7c9ff8c83ab195475000d74a29cb36/diff:/var/lib/docker/overlay2/be3502d132a8b884468dd4a5bcd811e32bd090fb7b255d888e53c9d4014ba2e0/diff:/var/lib/d
ocker/overlay2/f3b71613f15fd8e9cf665f9751d01943a85c6e1f36bc8a4317db3788ca9a6d68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/849eddd68d643a82047835a3b5f3d451ebcc19bf0e453019e0530f1459cfeb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/849eddd68d643a82047835a3b5f3d451ebcc19bf0e453019e0530f1459cfeb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/849eddd68d643a82047835a3b5f3d451ebcc19bf0e453019e0530f1459cfeb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220601113242-16804",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220601113242-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220601113242-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220601113242-16804",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220601113242-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d2520f98bd946a463abba625ce808acfeaa7c123dfcc09c6a6bd835fea42e16",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52497"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6d2520f98bd9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "af4f084bf2b4ed34a5ac557cfe0a9b19fa717011b0efcf4b5fce43037751b483",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f7b7bc4a3013ef683cda90da688bca751e398b5f704691ef347943a02e924737",
	                    "EndpointID": "af4f084bf2b4ed34a5ac557cfe0a9b19fa717011b0efcf4b5fce43037751b483",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
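
Individual fields can be pulled from the inspect output above with docker's Go-template formatting rather than reading the raw JSON; for example (same container name, standard docker CLI -f templates):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' missing-upgrade-20220601113242-16804
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-20220601113242-16804

Note that the kic container itself is "running"; it is dockerd inside it that failed to start, which is why the status probe below still degrades.
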
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220601113242-16804 -n missing-upgrade-20220601113242-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220601113242-16804 -n missing-upgrade-20220601113242-16804: exit status 6 (463.147437ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:33:26.640868   24336 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220601113242-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220601113242-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
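
Exit status 6 is the case the harness tolerates ("may be ok"): the container reports Running, but the failed legacy start never recorded this profile's endpoint in the shared kubeconfig, hence the extract-IP error above. Outside the harness, the usual repair would be along these lines (standard kubectl/minikube commands; update-context rewrites the profile's kubeconfig entry):

	kubectl config get-contexts
	out/minikube-darwin-amd64 update-context -p missing-upgrade-20220601113242-16804
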
helpers_test.go:175: Cleaning up "missing-upgrade-20220601113242-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220601113242-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220601113242-16804: (2.408873273s)
--- FAIL: TestMissingContainerUpgrade (46.44s)

TestStoppedBinaryUpgrade/Upgrade (36.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.301425567.exe start -p stopped-upgrade-20220601113821-16804 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.301425567.exe start -p stopped-upgrade-20220601113821-16804 --memory=2200 --vm-driver=docker : exit status 70 (24.450366197s)

-- stdout --
	* [stopped-upgrade-20220601113821-16804] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2108023584
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:38:32.161811998 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220601113821-16804" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:38:43.649811969 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220601113821-16804", then "minikube start -p stopped-upgrade-20220601113821-16804 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:38:43.649811969 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.301425567.exe start -p stopped-upgrade-20220601113821-16804 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.301425567.exe start -p stopped-upgrade-20220601113821-16804 --memory=2200 --vm-driver=docker : exit status 70 (4.979245238s)

-- stdout --
	* [stopped-upgrade-20220601113821-16804] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1004888024
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220601113821-16804" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.301425567.exe start -p stopped-upgrade-20220601113821-16804 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.301425567.exe start -p stopped-upgrade-20220601113821-16804 --memory=2200 --vm-driver=docker : exit status 70 (4.466795537s)

-- stdout --
	* [stopped-upgrade-20220601113821-16804] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1874409365
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220601113821-16804" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (36.77s)

TestPause/serial/VerifyStatus (62.85s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220601113830-16804 --output=json --layout=cluster

=== CONT  TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220601113830-16804 --output=json --layout=cluster: exit status 2 (16.107044068s)

-- stdout --
	{"Name":"pause-20220601113830-16804","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220601113830-16804","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
pause_test.go:200: incorrect status code: 405
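
The codes in this JSON are minikube's HTTP-style cluster status codes: 200 is OK and 405 is Stopped. The test fails because it had just paused the cluster (the step detail even reports 14 paused containers), yet apiserver and kubelet come back as 405 rather than the paused code (418 in the minikube source, if that state code has not changed). A quick way to pull just the codes out of the status output (assuming jq is available on the host):

	out/minikube-darwin-amd64 status -p pause-20220601113830-16804 --output=json --layout=cluster | jq '{cluster: .StatusCode, components: .Nodes[0].Components}'
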
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220601113830-16804
helpers_test.go:235: (dbg) docker inspect pause-20220601113830-16804:

-- stdout --
	[
	    {
	        "Id": "c18cc1c1f90261d417e054d0d98f8f447e33240bfc30cc877a74a8c9018e6d31",
	        "Created": "2022-06-01T18:38:37.143399045Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 140126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:38:37.439764552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/c18cc1c1f90261d417e054d0d98f8f447e33240bfc30cc877a74a8c9018e6d31/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c18cc1c1f90261d417e054d0d98f8f447e33240bfc30cc877a74a8c9018e6d31/hostname",
	        "HostsPath": "/var/lib/docker/containers/c18cc1c1f90261d417e054d0d98f8f447e33240bfc30cc877a74a8c9018e6d31/hosts",
	        "LogPath": "/var/lib/docker/containers/c18cc1c1f90261d417e054d0d98f8f447e33240bfc30cc877a74a8c9018e6d31/c18cc1c1f90261d417e054d0d98f8f447e33240bfc30cc877a74a8c9018e6d31-json.log",
	        "Name": "/pause-20220601113830-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220601113830-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220601113830-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/06d5ea21c3df574429ee38a64a857d95ae203602f4c82137ff78be3cb7334180-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/06d5ea21c3df574429ee38a64a857d95ae203602f4c82137ff78be3cb7334180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/06d5ea21c3df574429ee38a64a857d95ae203602f4c82137ff78be3cb7334180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/06d5ea21c3df574429ee38a64a857d95ae203602f4c82137ff78be3cb7334180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220601113830-16804",
	                "Source": "/var/lib/docker/volumes/pause-20220601113830-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220601113830-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220601113830-16804",
	                "name.minikube.sigs.k8s.io": "pause-20220601113830-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8a8e71b0464c98f68e9293f0bd6f58443d92ae5596d5ad3238fb07e3a8231e5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54183"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54184"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54185"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e8a8e71b0464",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220601113830-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c18cc1c1f902",
	                        "pause-20220601113830-16804"
	                    ],
	                    "NetworkID": "77ff25d7e420a291bd7b52281f3d89d6113844f35e8ff109b3aa21b01cdf0cd7",
	                    "EndpointID": "a7fa2772776ef0581f6a59df6ec58b3a2f91dc42ab297e5f0dc25216fc5737ed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
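
The JSON above is the shape of `docker container inspect` output that the harness dumps for post-mortem debugging. As a minimal sketch of pulling the profile's network details out of such a dump (the container name is taken from this run; the struct covers only the fields shown above and is illustrative, not part of the test harness):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // Subset of the `docker container inspect` JSON we care about here.
    type containerInspect struct {
    	NetworkSettings struct {
    		Networks map[string]struct {
    			IPAddress  string `json:"IPAddress"`
    			MacAddress string `json:"MacAddress"`
    		} `json:"Networks"`
    	} `json:"NetworkSettings"`
    }

    func main() {
    	// `docker container inspect` prints a JSON array, one element per container.
    	out, err := exec.Command("docker", "container", "inspect", "pause-20220601113830-16804").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var infos []containerInspect
    	if err := json.Unmarshal(out, &infos); err != nil {
    		log.Fatal(err)
    	}
    	for _, ci := range infos {
    		for name, net := range ci.NetworkSettings.Networks {
    			fmt.Printf("network %s: ip=%s mac=%s\n", name, net.IPAddress, net.MacAddress)
    		}
    	}
    }
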
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220601113830-16804 -n pause-20220601113830-16804
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220601113830-16804 -n pause-20220601113830-16804: exit status 2 (16.107298325s)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
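
`minikube status` reports state through its exit code as well as its stdout, which is why the harness flags the non-zero exit as "(may be ok)": here the Host field prints Running while the command still exits 2 because some other component is not in the expected state. A sketch of reading both the output and the exit code from Go (binary path and profile name as used in this run; purely illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-amd64", "status",
    		"--format={{.Host}}", "-p", "pause-20220601113830-16804")
    	out, err := cmd.Output() // stdout is returned even when the exit code is non-zero
    	var exitErr *exec.ExitError
    	switch {
    	case errors.As(err, &exitErr):
    		// Non-zero exit: the host may be up while another component is not.
    		fmt.Printf("status %q, exit code %d\n", out, exitErr.ExitCode())
    	case err != nil:
    		log.Fatal(err) // the command could not be started at all
    	default:
    		fmt.Printf("status %q, exit code 0\n", out)
    	}
    }
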
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220601113830-16804 logs -n 25
E0601 11:40:05.895336   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220601113830-16804 logs -n 25: (14.435382087s)
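
Collecting the post-mortem logs is itself slow on this run (the `Done:` line above shows `logs -n 25` alone took ~14s), so bounding that step is worth considering. A sketch that wraps the same subcommand in a context timeout (the 30s budget is an assumption for illustration, not a value the harness uses):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Kill the log collection if it takes longer than 30 seconds.
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
    		"-p", "pause-20220601113830-16804", "logs", "-n", "25")
    	start := time.Now()
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Printf("logs failed after %s: %v", time.Since(start), err)
    	}
    	fmt.Printf("collected %d bytes of logs in %s\n", len(out), time.Since(start))
    }
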
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | docker-flags-20220601113057-16804       | docker-flags-20220601113057-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |                |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |                |                     |                     |
	| delete  | -p                                      | docker-flags-20220601113057-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | docker-flags-20220601113057-16804       |                                         |         |                |                     |                     |
	| start   | -p                                      | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | cert-options-20220601113126-16804       |                                         |         |                |                     |                     |
	|         | --memory=2048                           |                                         |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |                |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |                |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |                |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |                |                     |                     |
	| ssh     | cert-options-20220601113126-16804       | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |                |                     |                     |
	| ssh     | -p                                      | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | cert-options-20220601113126-16804       |                                         |         |                |                     |                     |
	|         | -- sudo cat                             |                                         |         |                |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |                |                     |                     |
	| delete  | -p                                      | cert-options-20220601113126-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:31 PDT |
	|         | cert-options-20220601113126-16804       |                                         |         |                |                     |                     |
	| delete  | -p                                      | running-upgrade-20220601113155-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:32 PDT | 01 Jun 22 11:32 PDT |
	|         | running-upgrade-20220601113155-16804    |                                         |         |                |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220601113242-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:33 PDT | 01 Jun 22 11:33 PDT |
	|         | missing-upgrade-20220601113242-16804    |                                         |         |                |                     |                     |
	| start   | -p                                      | cert-expiration-20220601113122-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:31 PDT | 01 Jun 22 11:35 PDT |
	|         | cert-expiration-20220601113122-16804    |                                         |         |                |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:37 PDT | 01 Jun 22 11:37 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:37 PDT | 01 Jun 22 11:38 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	|         | --memory=2200                           |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6            |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |                |                     |                     |
	| start   | -p                                      | cert-expiration-20220601113122-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | cert-expiration-20220601113122-16804    |                                         |         |                |                     |                     |
	|         | --memory=2048                           |                                         |         |                |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| delete  | -p                                      | cert-expiration-20220601113122-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | cert-expiration-20220601113122-16804    |                                         |         |                |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	|         | --memory=2200                           |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6            |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |                |                     |                     |
	| logs    | kubernetes-upgrade-20220601113329-16804 | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | logs -n 25                              |                                         |         |                |                     |                     |
	| delete  | -p                                      | kubernetes-upgrade-20220601113329-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:38 PDT |
	|         | kubernetes-upgrade-20220601113329-16804 |                                         |         |                |                     |                     |
	| logs    | -p                                      | stopped-upgrade-20220601113821-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:39 PDT |
	|         | stopped-upgrade-20220601113821-16804    |                                         |         |                |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220601113821-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | stopped-upgrade-20220601113821-16804    |                                         |         |                |                     |                     |
	| start   | -p pause-20220601113830-16804           | pause-20220601113830-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:38 PDT | 01 Jun 22 11:39 PDT |
	|         | --memory=2048                           |                                         |         |                |                     |                     |
	|         | --install-addons=false                  |                                         |         |                |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |                |                     |                     |
	| start   | -p pause-20220601113830-16804           | pause-20220601113830-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| pause   | -p pause-20220601113830-16804           | pause-20220601113830-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |                |                     |                     |
	| start   | -p                                      | NoKubernetes-20220601113904-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220601113904-16804       |                                         |         |                |                     |                     |
	|         | --driver=docker                         |                                         |         |                |                     |                     |
	| start   | -p                                      | NoKubernetes-20220601113904-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220601113904-16804       |                                         |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |                |                     |                     |
	| delete  | -p                                      | NoKubernetes-20220601113904-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220601113904-16804       |                                         |         |                |                     |                     |
	| start   | -p                                      | NoKubernetes-20220601113904-16804       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:39 PDT | 01 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220601113904-16804       |                                         |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |                |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:39:47
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:39:47.955240   25763 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:39:47.955441   25763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:39:47.955443   25763 out.go:309] Setting ErrFile to fd 2...
	I0601 11:39:47.955446   25763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:39:47.955538   25763 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:39:47.955842   25763 out.go:303] Setting JSON to false
	I0601 11:39:47.970610   25763 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":7757,"bootTime":1654101030,"procs":359,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:39:47.970718   25763 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:39:47.992843   25763 out.go:177] * [NoKubernetes-20220601113904-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:39:48.035626   25763 notify.go:193] Checking for updates...
	I0601 11:39:48.057598   25763 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:39:48.079460   25763 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:39:48.100602   25763 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:39:48.122779   25763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:39:48.144839   25763 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:39:48.167079   25763 config.go:178] Loaded profile config "pause-20220601113830-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:39:48.167105   25763 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0601 11:39:48.167141   25763 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:39:48.239301   25763 docker.go:137] docker version: linux-20.10.14
	I0601 11:39:48.239431   25763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:39:48.365492   25763 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:39:48.309217711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:39:48.387739   25763 out.go:177] * Using the docker driver based on user configuration
	I0601 11:39:48.409304   25763 start.go:284] selected driver: docker
	I0601 11:39:48.409318   25763 start.go:806] validating driver "docker" against <nil>
	I0601 11:39:48.409338   25763 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:39:48.409628   25763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:39:48.537008   25763 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:39:48.480685357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:39:48.537107   25763 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0601 11:39:48.537115   25763 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0601 11:39:48.537124   25763 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:39:48.539115   25763 start_flags.go:373] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0601 11:39:48.539219   25763 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 11:39:48.561088   25763 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:39:48.582972   25763 cni.go:95] Creating CNI manager for ""
	I0601 11:39:48.582994   25763 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:39:48.583024   25763 start_flags.go:306] config:
	{Name:NoKubernetes-20220601113904-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:NoKubernetes-20220601113904-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:39:48.583155   25763 start.go:1656] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0601 11:39:48.604899   25763 out.go:177] * Starting minikube without Kubernetes NoKubernetes-20220601113904-16804 in cluster NoKubernetes-20220601113904-16804
	I0601 11:39:48.648120   25763 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:39:48.669920   25763 out.go:177] * Pulling base image ...
	I0601 11:39:48.712961   25763 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0601 11:39:48.712969   25763 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:39:48.778299   25763 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:39:48.778316   25763 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	W0601 11:39:48.781946   25763 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0601 11:39:48.782132   25763 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/NoKubernetes-20220601113904-16804/config.json ...
	I0601 11:39:48.782169   25763 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/NoKubernetes-20220601113904-16804/config.json: {Name:mk514d4625fd559e640e28c4ffe5710521b58d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:39:48.782493   25763 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:39:48.782534   25763 start.go:352] acquiring machines lock for NoKubernetes-20220601113904-16804: {Name:mk5551b6289f15f56b569278b19dbba9d7c25c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:39:48.782584   25763 start.go:356] acquired machines lock for "NoKubernetes-20220601113904-16804" in 42.302µs
	I0601 11:39:48.782603   25763 start.go:91] Provisioning new machine with config: &{Name:NoKubernetes-20220601113904-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-20220601113904-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:39:48.782653   25763 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:39:48.825144   25763 out.go:204] * Creating docker container (CPUs=2, Memory=5895MB) ...
	I0601 11:39:48.825550   25763 start.go:165] libmachine.API.Create for "NoKubernetes-20220601113904-16804" (driver="docker")
	I0601 11:39:48.825590   25763 client.go:168] LocalClient.Create starting
	I0601 11:39:48.825737   25763 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:39:48.825795   25763 main.go:134] libmachine: Decoding PEM data...
	I0601 11:39:48.825815   25763 main.go:134] libmachine: Parsing certificate...
	I0601 11:39:48.825926   25763 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:39:48.825968   25763 main.go:134] libmachine: Decoding PEM data...
	I0601 11:39:48.825983   25763 main.go:134] libmachine: Parsing certificate...
	I0601 11:39:48.826806   25763 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220601113904-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:39:48.890068   25763 cli_runner.go:211] docker network inspect NoKubernetes-20220601113904-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:39:48.890151   25763 network_create.go:272] running [docker network inspect NoKubernetes-20220601113904-16804] to gather additional debugging logs...
	I0601 11:39:48.890168   25763 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220601113904-16804
	W0601 11:39:48.952594   25763 cli_runner.go:211] docker network inspect NoKubernetes-20220601113904-16804 returned with exit code 1
	I0601 11:39:48.952626   25763 network_create.go:275] error running [docker network inspect NoKubernetes-20220601113904-16804]: docker network inspect NoKubernetes-20220601113904-16804: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220601113904-16804
	I0601 11:39:48.952645   25763 network_create.go:277] output of [docker network inspect NoKubernetes-20220601113904-16804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220601113904-16804
	
	** /stderr **
	I0601 11:39:48.952722   25763 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:39:49.015176   25763 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e2d0] misses:0}
	I0601 11:39:49.015206   25763 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:39:49.015221   25763 network_create.go:115] attempt to create docker network NoKubernetes-20220601113904-16804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:39:49.015276   25763 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601113904-16804
	W0601 11:39:49.077396   25763 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601113904-16804 returned with exit code 1
	W0601 11:39:49.077427   25763 network_create.go:107] failed to create docker network NoKubernetes-20220601113904-16804 192.168.49.0/24, will retry: subnet is taken
	I0601 11:39:49.077716   25763 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e2d0] amended:false}} dirty:map[] misses:0}
	I0601 11:39:49.077730   25763 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:39:49.077918   25763 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e2d0] amended:true}} dirty:map[192.168.49.0:0xc00000e2d0 192.168.58.0:0xc000518618] misses:0}
	I0601 11:39:49.077928   25763 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:39:49.077933   25763 network_create.go:115] attempt to create docker network NoKubernetes-20220601113904-16804 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0601 11:39:49.077996   25763 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true NoKubernetes-20220601113904-16804
	I0601 11:39:49.172120   25763 network_create.go:99] docker network NoKubernetes-20220601113904-16804 192.168.58.0/24 created
	I0601 11:39:49.172166   25763 kic.go:106] calculated static IP "192.168.58.2" for the "NoKubernetes-20220601113904-16804" container
	I0601 11:39:49.172252   25763 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:39:49.238384   25763 cli_runner.go:164] Run: docker volume create NoKubernetes-20220601113904-16804 --label name.minikube.sigs.k8s.io=NoKubernetes-20220601113904-16804 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:39:49.300848   25763 oci.go:103] Successfully created a docker volume NoKubernetes-20220601113904-16804
	I0601 11:39:49.300980   25763 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-20220601113904-16804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220601113904-16804 --entrypoint /usr/bin/test -v NoKubernetes-20220601113904-16804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:39:49.775087   25763 oci.go:107] Successfully prepared a docker volume NoKubernetes-20220601113904-16804
	I0601 11:39:49.775142   25763 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0601 11:39:49.775254   25763 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:39:49.900843   25763 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-20220601113904-16804 --name NoKubernetes-20220601113904-16804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220601113904-16804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-20220601113904-16804 --network NoKubernetes-20220601113904-16804 --ip 192.168.58.2 --volume NoKubernetes-20220601113904-16804:/var --security-opt apparmor=unconfined --memory=5895mb --memory-swap=5895mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:39:50.290351   25763 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220601113904-16804 --format={{.State.Running}}
	I0601 11:39:50.364986   25763 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220601113904-16804 --format={{.State.Status}}
	I0601 11:39:50.446079   25763 cli_runner.go:164] Run: docker exec NoKubernetes-20220601113904-16804 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:39:50.578493   25763 oci.go:247] the created container "NoKubernetes-20220601113904-16804" has a running status.
	I0601 11:39:50.578513   25763 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa...
	I0601 11:39:50.899575   25763 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:39:51.010958   25763 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220601113904-16804 --format={{.State.Status}}
	I0601 11:39:51.081898   25763 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:39:51.081922   25763 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-20220601113904-16804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:39:51.215384   25763 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220601113904-16804 --format={{.State.Status}}
	I0601 11:39:51.284843   25763 machine.go:88] provisioning docker machine ...
	I0601 11:39:51.284875   25763 ubuntu.go:169] provisioning hostname "NoKubernetes-20220601113904-16804"
	I0601 11:39:51.284994   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:51.355477   25763 main.go:134] libmachine: Using SSH client type: native
	I0601 11:39:51.355652   25763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55297 <nil> <nil>}
	I0601 11:39:51.355665   25763 main.go:134] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-20220601113904-16804 && echo "NoKubernetes-20220601113904-16804" | sudo tee /etc/hostname
	I0601 11:39:51.481658   25763 main.go:134] libmachine: SSH cmd err, output: <nil>: NoKubernetes-20220601113904-16804
	
	I0601 11:39:51.481733   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:51.552413   25763 main.go:134] libmachine: Using SSH client type: native
	I0601 11:39:51.552630   25763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55297 <nil> <nil>}
	I0601 11:39:51.552644   25763 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-20220601113904-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-20220601113904-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-20220601113904-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:39:51.668720   25763 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:39:51.668762   25763 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:39:51.668786   25763 ubuntu.go:177] setting up certificates
	I0601 11:39:51.668793   25763 provision.go:83] configureAuth start
	I0601 11:39:51.668867   25763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220601113904-16804
	I0601 11:39:51.738694   25763 provision.go:138] copyHostCerts
	I0601 11:39:51.738769   25763 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:39:51.738775   25763 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:39:51.738870   25763 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:39:51.739053   25763 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:39:51.739060   25763 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:39:51.739141   25763 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:39:51.739289   25763 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:39:51.739294   25763 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:39:51.739358   25763 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:39:51.739485   25763 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-20220601113904-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube NoKubernetes-20220601113904-16804]
	I0601 11:39:51.945556   25763 provision.go:172] copyRemoteCerts
	I0601 11:39:51.945599   25763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:39:51.945639   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:52.017265   25763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55297 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa Username:docker}
	I0601 11:39:52.103443   25763 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0601 11:39:52.120335   25763 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:39:52.138641   25763 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:39:52.156711   25763 provision.go:86] duration metric: configureAuth took 487.906426ms
	I0601 11:39:52.156722   25763 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:39:52.156866   25763 config.go:178] Loaded profile config "NoKubernetes-20220601113904-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0601 11:39:52.156930   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:52.227192   25763 main.go:134] libmachine: Using SSH client type: native
	I0601 11:39:52.227363   25763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55297 <nil> <nil>}
	I0601 11:39:52.227385   25763 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:39:52.345276   25763 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:39:52.345287   25763 ubuntu.go:71] root file system type: overlay
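(Here minikube probes the node's root filesystem type before configuring the container runtime; "overlay" means the kic container's rootfs is an overlayfs, which informs the Docker storage settings written next. The same probe can be repeated by hand against this profile, assuming the node is still up:

    minikube ssh -p NoKubernetes-20220601113904-16804 -- df --output=fstype /
)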
	I0601 11:39:52.345421   25763 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:39:52.345517   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:52.416255   25763 main.go:134] libmachine: Using SSH client type: native
	I0601 11:39:52.416391   25763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55297 <nil> <nil>}
	I0601 11:39:52.416464   25763 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:39:52.542901   25763 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:39:52.542994   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:52.612863   25763 main.go:134] libmachine: Using SSH client type: native
	I0601 11:39:52.613005   25763 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 55297 <nil> <nil>}
	I0601 11:39:52.613014   25763 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:39:53.195574   25763 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:39:52.556221946 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
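(The diff just applied shows the central systemd detail, also spelled out in the comments of the generated unit: a bare "ExecStart=" first clears the command inherited from the base configuration, and only then is the new command set, because systemd rejects a second ExecStart= for non-oneshot services. The same pattern in a minimal standalone drop-in — a hypothetical override file, not taken from the minikube source:

    sudo mkdir -p /etc/systemd/system/docker.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
)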
	
	I0601 11:39:53.195593   25763 machine.go:91] provisioned docker machine in 1.910747573s
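(With the machine provisioned and dockerd restarted on tcp://0.0.0.0:2376, the freshly written TLS endpoint can be checked from the host. This is an illustrative sketch: it assumes the kic container publishes 2376 on a host port, analogous to the 22->55297 mapping visible above, and that the client cert/key live under ~/.minikube/certs as ca.pem/cert.pem/key.pem:

    PORT=$(docker inspect -f '{{(index (index .NetworkSettings.Ports "2376/tcp") 0).HostPort}}' NoKubernetes-20220601113904-16804)
    docker --tlsverify \
      --tlscacert ~/.minikube/certs/ca.pem \
      --tlscert   ~/.minikube/certs/cert.pem \
      --tlskey    ~/.minikube/certs/key.pem \
      -H tcp://127.0.0.1:"$PORT" version
)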
	I0601 11:39:53.195599   25763 client.go:171] LocalClient.Create took 4.370034371s
	I0601 11:39:53.195625   25763 start.go:173] duration metric: libmachine.API.Create for "NoKubernetes-20220601113904-16804" took 4.370099683s
	I0601 11:39:53.195636   25763 start.go:306] post-start starting for "NoKubernetes-20220601113904-16804" (driver="docker")
	I0601 11:39:53.195643   25763 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:39:53.195747   25763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:39:53.195818   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:53.271052   25763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55297 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa Username:docker}
	I0601 11:39:53.359444   25763 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:39:53.362954   25763 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:39:53.362968   25763 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:39:53.362973   25763 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:39:53.362979   25763 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:39:53.362985   25763 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:39:53.363074   25763 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:39:53.363206   25763 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:39:53.363340   25763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:39:53.370962   25763 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:39:53.388312   25763 start.go:309] post-start completed in 192.665608ms
	I0601 11:39:53.388818   25763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220601113904-16804
	I0601 11:39:53.459351   25763 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/NoKubernetes-20220601113904-16804/config.json ...
	I0601 11:39:53.459758   25763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:39:53.459806   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:53.530390   25763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55297 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa Username:docker}
	I0601 11:39:53.614750   25763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:39:53.619659   25763 start.go:134] duration metric: createHost completed in 4.837031552s
	I0601 11:39:53.619670   25763 start.go:81] releasing machines lock for "NoKubernetes-20220601113904-16804", held for 4.837112112s
	I0601 11:39:53.619726   25763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220601113904-16804
	I0601 11:39:53.690281   25763 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:39:53.690367   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:53.690462   25763 ssh_runner.go:195] Run: systemctl --version
	I0601 11:39:53.690880   25763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220601113904-16804
	I0601 11:39:53.765765   25763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55297 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa Username:docker}
	I0601 11:39:53.767531   25763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55297 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/NoKubernetes-20220601113904-16804/id_rsa Username:docker}
	I0601 11:39:53.982503   25763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:39:53.991803   25763 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:39:54.001459   25763 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:39:54.001511   25763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:39:54.011177   25763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:39:54.023613   25763 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:39:54.092271   25763 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:39:54.155068   25763 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:39:54.164990   25763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:39:54.227562   25763 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:39:54.259828   25763 out.go:177] * Done! minikube is ready without Kubernetes!
	I0601 11:39:54.281058   25763 out.go:177] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube docker-env" to point your docker-cli to the docker inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
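(The "docker-env" suggestion in the box above is the usual way to use this Kubernetes-less node as a plain Docker host, for example:

    eval "$(minikube docker-env -p NoKubernetes-20220601113904-16804)"
    docker ps   # now lists containers running inside the minikube node
)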
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:38:37 UTC, end at Wed 2022-06-01 18:39:59 UTC. --
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[128]: time="2022-06-01T18:38:39.854097903Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[128]: time="2022-06-01T18:38:39.854694776Z" level=info msg="Daemon shutdown complete"
	Jun 01 18:38:39 pause-20220601113830-16804 systemd[1]: docker.service: Succeeded.
	Jun 01 18:38:39 pause-20220601113830-16804 systemd[1]: Stopped Docker Application Container Engine.
	Jun 01 18:38:39 pause-20220601113830-16804 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.909515370Z" level=info msg="Starting up"
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.911270270Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.911326525Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.911347326Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.911354624Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.912414930Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.912474935Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.912524930Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.912573963Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.915784613Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.919429423Z" level=info msg="Loading containers: start."
	Jun 01 18:38:39 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:39.989972472Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 18:38:40 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:40.019787975Z" level=info msg="Loading containers: done."
	Jun 01 18:38:40 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:40.028941315Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 18:38:40 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:40.028998052Z" level=info msg="Daemon has completed initialization"
	Jun 01 18:38:40 pause-20220601113830-16804 systemd[1]: Started Docker Application Container Engine.
	Jun 01 18:38:40 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:40.051650854Z" level=info msg="API listen on [::]:2376"
	Jun 01 18:38:40 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:38:40.054440777Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 18:39:16 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:39:16.293968020Z" level=info msg="ignoring event" container=4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:39:16 pause-20220601113830-16804 dockerd[381]: time="2022-06-01T18:39:16.491089911Z" level=info msg="ignoring event" container=e938afd989bbac4c2358be84248e3e10a5f89e6ca9cf2a80e9ddb0c6d58d0432 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS                       PORTS     NAMES
	b11f70bc6750   6e38f40d628d           "/storage-provisioner"   37 seconds ago       Up 36 seconds (Paused)                 k8s_storage-provisioner_storage-provisioner_kube-system_bb9073b5-2ca5-49ca-9293-480eca9fb479_0
	880dfe86b8f9   k8s.gcr.io/pause:3.6   "/pause"                 37 seconds ago       Up 36 seconds (Paused)                 k8s_POD_storage-provisioner_kube-system_bb9073b5-2ca5-49ca-9293-480eca9fb479_0
	61556c6361de   a4ca41631cc7           "/coredns -conf /etc…"   55 seconds ago       Up 54 seconds (Paused)                 k8s_coredns_coredns-64897985d-znpnh_kube-system_479b98bf-5229-4c9d-910d-653c81564dfa_0
	dd6a0a59fa3d   4c0375452406           "/usr/local/bin/kube…"   55 seconds ago       Up 54 seconds (Paused)                 k8s_kube-proxy_kube-proxy-jfbsj_kube-system_4d9bf3d8-f098-4e70-88d7-17ca3af3a9a6_0
	0a64b470dd51   k8s.gcr.io/pause:3.6   "/pause"                 55 seconds ago       Up 54 seconds (Paused)                 k8s_POD_coredns-64897985d-znpnh_kube-system_479b98bf-5229-4c9d-910d-653c81564dfa_0
	7c1137e06191   k8s.gcr.io/pause:3.6   "/pause"                 55 seconds ago       Up 55 seconds (Paused)                 k8s_POD_kube-proxy-jfbsj_kube-system_4d9bf3d8-f098-4e70-88d7-17ca3af3a9a6_0
	e938afd989bb   k8s.gcr.io/pause:3.6   "/pause"                 56 seconds ago       Exited (0) 44 seconds ago              k8s_POD_coredns-64897985d-2885n_kube-system_fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6_0
	df3d249289a5   8fa62c12256d           "kube-apiserver --ad…"   About a minute ago   Up About a minute (Paused)             k8s_kube-apiserver_kube-apiserver-pause-20220601113830-16804_kube-system_badfac22d8d2d0da9ce0665a56a31bac_0
	a639c7582456   595f327f224a           "kube-scheduler --au…"   About a minute ago   Up About a minute (Paused)             k8s_kube-scheduler_kube-scheduler-pause-20220601113830-16804_kube-system_2ca79eb8487129f157bb1a6791ae0b83_0
	d6156fc5b81b   df7b72818ad2           "kube-controller-man…"   About a minute ago   Up About a minute (Paused)             k8s_kube-controller-manager_kube-controller-manager-pause-20220601113830-16804_kube-system_f2c87135393db070e81c9313553af4d2_0
	da0af86b49b9   25f8c7f3da61           "etcd --advertise-cl…"   About a minute ago   Up About a minute (Paused)             k8s_etcd_etcd-pause-20220601113830-16804_kube-system_152dae96131cf6db23f2ae8992cfe654_0
	3cd731c7cd13   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-scheduler-pause-20220601113830-16804_kube-system_2ca79eb8487129f157bb1a6791ae0b83_0
	7b4247381385   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-controller-manager-pause-20220601113830-16804_kube-system_f2c87135393db070e81c9313553af4d2_0
	6c9ba91b0efa   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-apiserver-pause-20220601113830-16804_kube-system_badfac22d8d2d0da9ce0665a56a31bac_0
	dd915b016cc6   k8s.gcr.io/pause:3.6   "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_etcd-pause-20220601113830-16804_kube-system_152dae96131cf6db23f2ae8992cfe654_0
	time="2022-06-01T18:40:01Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> coredns [61556c6361de] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001517] FS-Cache: O-key=[8] 'dadf850300000000'
	[  +0.001304] FS-Cache: N-cookie c=000000009a04840d [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.001865] FS-Cache: N-cookie d=00000000d2bd0c1d n=00000000fbe96da6
	[  +0.001445] FS-Cache: N-key=[8] 'dadf850300000000'
	[  +0.001975] FS-Cache: Duplicate cookie detected
	[  +0.001109] FS-Cache: O-cookie c=000000004f2e48df [p=00000000c3c39397 fl=226 nc=0 na=1]
	[  +0.001769] FS-Cache: O-cookie d=00000000d2bd0c1d n=0000000018717240
	[  +0.001448] FS-Cache: O-key=[8] 'dadf850300000000'
	[  +0.001135] FS-Cache: N-cookie c=000000009a04840d [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.002013] FS-Cache: N-cookie d=00000000d2bd0c1d n=00000000571aa914
	[  +0.001438] FS-Cache: N-key=[8] 'dadf850300000000'
	[  +3.424117] FS-Cache: Duplicate cookie detected
	[  +0.001104] FS-Cache: O-cookie c=0000000002754839 [p=00000000c3c39397 fl=226 nc=0 na=1]
	[  +0.001774] FS-Cache: O-cookie d=00000000d2bd0c1d n=0000000062f4d47d
	[  +0.001610] FS-Cache: O-key=[8] 'd9df850300000000'
	[  +0.001207] FS-Cache: N-cookie c=00000000143faa29 [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.001813] FS-Cache: N-cookie d=00000000d2bd0c1d n=000000004f0ade46
	[  +0.001508] FS-Cache: N-key=[8] 'd9df850300000000'
	[  +0.463678] FS-Cache: Duplicate cookie detected
	[  +0.001235] FS-Cache: O-cookie c=00000000f58304b4 [p=00000000c3c39397 fl=226 nc=0 na=1]
	[  +0.001831] FS-Cache: O-cookie d=00000000d2bd0c1d n=000000005c1310b5
	[  +0.001510] FS-Cache: O-key=[8] 'e4df850300000000'
	[  +0.001109] FS-Cache: N-cookie c=0000000057f6365e [p=00000000c3c39397 fl=2 nc=0 na=1]
	[  +0.002017] FS-Cache: N-cookie d=00000000d2bd0c1d n=00000000d6cb9dcd
	[  +0.001494] FS-Cache: N-key=[8] 'e4df850300000000'
	
	* 
	* ==> etcd [da0af86b49b9] <==
	* {"level":"info","ts":"2022-06-01T18:38:48.060Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:48.060Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-01T18:38:48.060Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:38:48.061Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-06-01T18:39:05.553Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.464822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013404354370783 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:92 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:865 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T18:39:05.553Z","caller":"traceutil/trace.go:171","msg":"trace[442931034] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"193.261134ms","start":"2022-06-01T18:39:05.360Z","end":"2022-06-01T18:39:05.553Z","steps":["trace[442931034] 'process raft request'  (duration: 193.115727ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:39:05.554Z","caller":"traceutil/trace.go:171","msg":"trace[1438836822] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"193.357136ms","start":"2022-06-01T18:39:05.360Z","end":"2022-06-01T18:39:05.553Z","steps":["trace[1438836822] 'process raft request'  (duration: 81.078086ms)","trace[1438836822] 'compare'  (duration: 111.249289ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T18:39:05.554Z","caller":"traceutil/trace.go:171","msg":"trace[524567291] linearizableReadLoop","detail":"{readStateIndex:405; appliedIndex:403; }","duration":"150.298422ms","start":"2022-06-01T18:39:05.403Z","end":"2022-06-01T18:39:05.553Z","steps":["trace[524567291] 'read index received'  (duration: 38.096439ms)","trace[524567291] 'applied index is now lower than readState.Index'  (duration: 112.201423ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:39:05.554Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"150.805526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:1 size:245"}
	{"level":"info","ts":"2022-06-01T18:39:05.554Z","caller":"traceutil/trace.go:171","msg":"trace[759079747] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:393; }","duration":"150.93872ms","start":"2022-06-01T18:39:05.403Z","end":"2022-06-01T18:39:05.554Z","steps":["trace[759079747] 'agreement among raft nodes before linearized reading'  (duration: 150.783163ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T18:39:05.554Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"141.934646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2022-06-01T18:39:05.554Z","caller":"traceutil/trace.go:171","msg":"trace[557467824] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:393; }","duration":"141.98517ms","start":"2022-06-01T18:39:05.412Z","end":"2022-06-01T18:39:05.554Z","steps":["trace[557467824] 'agreement among raft nodes before linearized reading'  (duration: 141.919756ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T18:39:05.554Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.847537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-06-01T18:39:05.554Z","caller":"traceutil/trace.go:171","msg":"trace[2035693539] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:393; }","duration":"101.863075ms","start":"2022-06-01T18:39:05.452Z","end":"2022-06-01T18:39:05.554Z","steps":["trace[2035693539] 'agreement among raft nodes before linearized reading'  (duration: 101.8361ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T18:39:05.746Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"143.845074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:263"}
	{"level":"info","ts":"2022-06-01T18:39:05.746Z","caller":"traceutil/trace.go:171","msg":"trace[260507785] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:400; }","duration":"143.900713ms","start":"2022-06-01T18:39:05.602Z","end":"2022-06-01T18:39:05.746Z","steps":["trace[260507785] 'agreement among raft nodes before linearized reading'  (duration: 68.615295ms)","trace[260507785] 'range keys from in-memory index tree'  (duration: 75.196792ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T18:39:05.746Z","caller":"traceutil/trace.go:171","msg":"trace[1891075114] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"115.681077ms","start":"2022-06-01T18:39:05.631Z","end":"2022-06-01T18:39:05.746Z","steps":["trace[1891075114] 'process raft request'  (duration: 40.373037ms)","trace[1891075114] 'compare'  (duration: 75.092483ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T18:39:05.747Z","caller":"traceutil/trace.go:171","msg":"trace[809357756] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"114.198631ms","start":"2022-06-01T18:39:05.632Z","end":"2022-06-01T18:39:05.747Z","steps":["trace[809357756] 'process raft request'  (duration: 113.979133ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:39:11.736Z","caller":"traceutil/trace.go:171","msg":"trace[1717503183] linearizableReadLoop","detail":"{readStateIndex:480; appliedIndex:479; }","duration":"228.96331ms","start":"2022-06-01T18:39:11.507Z","end":"2022-06-01T18:39:11.736Z","steps":["trace[1717503183] 'read index received'  (duration: 228.373556ms)","trace[1717503183] 'applied index is now lower than readState.Index'  (duration: 589.223µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:39:11.736Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"229.082008ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T18:39:11.736Z","caller":"traceutil/trace.go:171","msg":"trace[688579091] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:464; }","duration":"229.127346ms","start":"2022-06-01T18:39:11.507Z","end":"2022-06-01T18:39:11.736Z","steps":["trace[688579091] 'agreement among raft nodes before linearized reading'  (duration: 229.041323ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T18:39:11.736Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"171.381624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-2885n\" ","response":"range_response_count:1 size:4640"}
	{"level":"info","ts":"2022-06-01T18:39:11.736Z","caller":"traceutil/trace.go:171","msg":"trace[918921670] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-2885n; range_end:; response_count:1; response_revision:464; }","duration":"171.475629ms","start":"2022-06-01T18:39:11.565Z","end":"2022-06-01T18:39:11.736Z","steps":["trace[918921670] 'agreement among raft nodes before linearized reading'  (duration: 171.358551ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T18:39:11.736Z","caller":"traceutil/trace.go:171","msg":"trace[1489556364] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"471.449918ms","start":"2022-06-01T18:39:11.264Z","end":"2022-06-01T18:39:11.736Z","steps":["trace[1489556364] 'process raft request'  (duration: 470.859097ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T18:39:11.737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T18:39:11.264Z","time spent":"472.191597ms","remote":"127.0.0.1:52906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:322 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128013404354370928 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	
	* 
	* ==> kernel <==
	*  18:40:11 up 42 min,  0 users,  load average: 1.20, 1.15, 1.05
	Linux pause-20220601113830-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [df3d249289a5] <==
	* I0601 18:38:49.988921       1 cache.go:39] Caches are synced for autoregister controller
	I0601 18:38:49.988966       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 18:38:49.989444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 18:38:49.995487       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 18:38:50.003370       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 18:38:50.026642       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 18:38:50.887787       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 18:38:50.893480       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 18:38:50.894943       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 18:38:50.895848       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 18:38:50.895890       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 18:38:51.203450       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 18:38:51.227685       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 18:38:51.353195       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 18:38:51.356964       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0601 18:38:51.357715       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 18:38:51.361572       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 18:38:52.037730       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 18:38:52.889100       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 18:38:52.896426       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 18:38:52.904408       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 18:38:53.128923       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 18:39:05.751200       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 18:39:05.751211       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 18:39:06.867169       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [d6156fc5b81b] <==
	* I0601 18:39:04.844942       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0601 18:39:04.844954       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0601 18:39:04.923130       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 18:39:04.933530       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 18:39:05.005117       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 18:39:05.005091       1 shared_informer.go:247] Caches are synced for taint 
	I0601 18:39:05.005318       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0601 18:39:05.005384       1 node_lifecycle_controller.go:1012] Missing timestamp for Node pause-20220601113830-16804. Assuming now as a timestamp.
	I0601 18:39:05.005405       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0601 18:39:05.005574       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 18:39:05.005803       1 event.go:294] "Event occurred" object="pause-20220601113830-16804" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220601113830-16804 event: Registered Node pause-20220601113830-16804 in Controller"
	I0601 18:39:05.007274       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 18:39:05.009870       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 18:39:05.042885       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 18:39:05.048450       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 18:39:05.048478       1 disruption.go:371] Sending events to api server.
	I0601 18:39:05.428855       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 18:39:05.454114       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 18:39:05.454145       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 18:39:05.785945       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 18:39:05.822858       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jfbsj"
	I0601 18:39:05.833717       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-2885n"
	I0601 18:39:05.837665       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 18:39:05.841532       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-znpnh"
	I0601 18:39:05.860954       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2885n"
	
	* 
	* ==> kube-proxy [dd6a0a59fa3d] <==
	* I0601 18:39:06.788243       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0601 18:39:06.788300       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0601 18:39:06.788344       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 18:39:06.862666       1 server_others.go:206] "Using iptables Proxier"
	I0601 18:39:06.862736       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 18:39:06.862745       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 18:39:06.862758       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 18:39:06.863180       1 server.go:656] "Version info" version="v1.23.6"
	I0601 18:39:06.865441       1 config.go:226] "Starting endpoint slice config controller"
	I0601 18:39:06.865444       1 config.go:317] "Starting service config controller"
	I0601 18:39:06.865538       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 18:39:06.865603       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 18:39:06.965959       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 18:39:06.967228       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a639c7582456] <==
	* W0601 18:38:49.953006       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 18:38:49.953898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 18:38:49.953393       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 18:38:49.953904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 18:38:50.781440       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 18:38:50.781487       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 18:38:50.783645       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 18:38:50.783675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 18:38:50.849632       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 18:38:50.849698       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 18:38:50.863153       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 18:38:50.863188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 18:38:50.979166       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 18:38:50.979204       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 18:38:51.009206       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 18:38:51.009244       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 18:38:51.047147       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 18:38:51.047190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 18:38:51.047331       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 18:38:51.047367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0601 18:38:51.445288       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0601 18:38:52.952171       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 18:38:53.386187       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 18:38:53.386973       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 18:38:53.455370       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:38:37 UTC, end at Wed 2022-06-01 18:40:12 UTC. --
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.582573    1777 scope.go:110] "RemoveContainer" containerID="4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6"
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.589341    1777 scope.go:110] "RemoveContainer" containerID="4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6"
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: E0601 18:39:16.590179    1777 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6" containerID="4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6"
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.590233    1777 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6} err="failed to get container status \"4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6\": rpc error: code = Unknown desc = Error: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6"
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.608284    1777 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6-config-volume\") pod \"fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6\" (UID: \"fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6\") "
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.608338    1777 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6-kube-api-access-hdjgk\") pod \"fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6\" (UID: \"fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6\") "
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: W0601 18:39:16.608480    1777 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.608594    1777 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6" (UID: "fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.610789    1777 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6-kube-api-access-hdjgk" (OuterVolumeSpecName: "kube-api-access-hdjgk") pod "fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6" (UID: "fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6"). InnerVolumeSpecName "kube-api-access-hdjgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.708855    1777 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6-config-volume\") on node \"pause-20220601113830-16804\" DevicePath \"\""
	Jun 01 18:39:16 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:16.708905    1777 reconciler.go:300] "Volume detached for volume \"kube-api-access-hdjgk\" (UniqueName: \"kubernetes.io/projected/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6-kube-api-access-hdjgk\") on node \"pause-20220601113830-16804\" DevicePath \"\""
	Jun 01 18:39:17 pause-20220601113830-16804 kubelet[1777]: E0601 18:39:17.199397    1777 remote_runtime.go:479] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6" containerID="4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6"
	Jun 01 18:39:17 pause-20220601113830-16804 kubelet[1777]: E0601 18:39:17.199463    1777 kuberuntime_container.go:728] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6" pod="kube-system/coredns-64897985d-2885n" podUID=fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6 containerName="coredns" containerID="docker://4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6" gracePeriod=1
	Jun 01 18:39:17 pause-20220601113830-16804 kubelet[1777]: E0601 18:39:17.199477    1777 kuberuntime_container.go:753] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6" pod="kube-system/coredns-64897985d-2885n" podUID=fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6 containerName="coredns" containerID={Type:docker ID:4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6}
	Jun 01 18:39:17 pause-20220601113830-16804 kubelet[1777]: E0601 18:39:17.201333    1777 kubelet.go:1808] failed to "KillContainer" for "coredns" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6"
	Jun 01 18:39:17 pause-20220601113830-16804 kubelet[1777]: E0601 18:39:17.201358    1777 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"coredns\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: 4c1f6a224bdfc3e2096dc221b613550b64acc690bd683f202dcd324e195971c6\"" pod="kube-system/coredns-64897985d-2885n" podUID=fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6
	Jun 01 18:39:17 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:17.201903    1777 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6 path="/var/lib/kubelet/pods/fbebbdb6-091e-49ed-b9c9-a9a2b9556ca6/volumes"
	Jun 01 18:39:23 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:23.874330    1777 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 18:39:24 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:24.058220    1777 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bb9073b5-2ca5-49ca-9293-480eca9fb479-tmp\") pod \"storage-provisioner\" (UID: \"bb9073b5-2ca5-49ca-9293-480eca9fb479\") " pod="kube-system/storage-provisioner"
	Jun 01 18:39:24 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:24.058256    1777 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689gs\" (UniqueName: \"kubernetes.io/projected/bb9073b5-2ca5-49ca-9293-480eca9fb479-kube-api-access-689gs\") pod \"storage-provisioner\" (UID: \"bb9073b5-2ca5-49ca-9293-480eca9fb479\") " pod="kube-system/storage-provisioner"
	Jun 01 18:39:25 pause-20220601113830-16804 kubelet[1777]: I0601 18:39:25.762019    1777 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 01 18:39:25 pause-20220601113830-16804 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 01 18:39:25 pause-20220601113830-16804 systemd[1]: kubelet.service: Succeeded.
	Jun 01 18:39:25 pause-20220601113830-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 18:39:25 pause-20220601113830-16804 systemd[1]: kubelet.service: Consumed 1.203s CPU time.
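The "No such container" errors in the kubelet journal above are the kubelet retrying a kill for a container the Docker daemon has already removed. A runtime client that wants this to be idempotent typically treats not-found as success rather than an error to retry; below is a minimal, self-contained sketch of that pattern (the stopContainer stub and errNotFound sentinel are illustrative stand-ins, not kubelet or CRI APIs):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's "no such container" condition.
var errNotFound = errors.New("no such container")

// stopContainer stands in for a runtime call like CRI StopContainer; here it
// always reports the container as missing, mirroring the log above.
func stopContainer(id string) error {
	return fmt.Errorf("rpc error: %w: %s", errNotFound, id)
}

func main() {
	id := "4c1f6a224bdf" // truncated illustrative ID
	if err := stopContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			// The container is already gone, so the stop has effectively succeeded.
			fmt.Println("container already removed; treating stop as complete")
			return
		}
		fmt.Println("kill container failed:", err)
	}
}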
	
	* 
	* ==> storage-provisioner [b11f70bc6750] <==
	* I0601 18:39:24.432769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 18:39:24.442730       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 18:39:24.442779       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 18:39:24.454242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 18:39:24.454403       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220601113830-16804_e8504f9a-11e1-4285-ba1b-8b5b72a28e0d!
	I0601 18:39:24.455338       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efea0200-05ea-4cd6-b593-147c98e1da4c", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220601113830-16804_e8504f9a-11e1-4285-ba1b-8b5b72a28e0d became leader
	I0601 18:39:24.554944       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220601113830-16804_e8504f9a-11e1-4285-ba1b-8b5b72a28e0d!
	
	

-- /stdout --
** stderr ** 
	E0601 11:40:11.544056   25881 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
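The storage-provisioner log in the captured output above shows the standard client-go leader-election flow: acquire the kube-system/k8s.io-minikube-hostpath lock, then start the provisioner controller. The following is a minimal sketch of that pattern using client-go's leaderelection package; it uses the modern Leases lock whereas the provisioner in this log records an Endpoints object, and the kubeconfig path and identity string are placeholder assumptions:

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock namespace and name match the log; the identity is a placeholder.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-provisioner-id"})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Corresponds to "successfully acquired lease ... Starting provisioner controller".
				log.Println("acquired lease, starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}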
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220601113830-16804 -n pause-20220601113830-16804

=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220601113830-16804 -n pause-20220601113830-16804: exit status 2 (16.101568818s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220601113830-16804" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (62.85s)
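The --format flag used by this check is a Go text/template rendered against minikube's status struct, which is why the command prints just "Stopped". A small stand-alone sketch of the mechanism follows; the Status struct here is a simplified stand-in for illustration, not minikube's actual type:

package main

import (
	"log"
	"os"
	"text/template"
)

// Status is a simplified stand-in with field names matching the template.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// The same template string passed via --format in the test above.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped"
		log.Fatal(err)
	}
}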

TestStartStop/group/old-k8s-version/serial/FirstStart (250.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0601 11:48:06.303898   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:48:08.926414   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:48:14.046590   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:48:14.590597   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:48:24.288788   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:48:44.769755   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:07.555086   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:49:07.876880   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:49:25.731832   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.642302   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.648770   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.660982   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.683185   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.724316   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.806612   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:32.968092   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:33.289493   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:33.930231   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:35.210391   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:37.770886   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:38.190222   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:49:42.891418   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:49:53.133611   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:50:13.614205   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:50:20.700563   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:20.705954   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:20.718068   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:20.738339   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:20.778492   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:20.858682   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:21.020430   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:21.340874   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:22.009685   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:23.289978   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:25.850101   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:30.970450   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:50:41.211734   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
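The repeated cert_rotation errors above come from client-go attempting to reload client certificates for profiles whose files have since been deleted. A minimal sketch of detecting that condition before handing a certificate path to a client (the path below is illustrative, not one of the real profile paths):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	certPath := "/tmp/example-profile/client.crt" // illustrative path
	if _, err := os.Stat(certPath); errors.Is(err, fs.ErrNotExist) {
		// This is the "no such file or directory" case logged above.
		fmt.Println("client cert missing, skipping rotation:", certPath)
		return
	}
	fmt.Println("client cert present:", certPath)
}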

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m9.897001644s)

-- stdout --
	* [old-k8s-version-20220601114806-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node old-k8s-version-20220601114806-16804 in cluster old-k8s-version-20220601114806-16804
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:48:06.344775   27574 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:48:06.366220   27574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:48:06.366239   27574 out.go:309] Setting ErrFile to fd 2...
	I0601 11:48:06.366251   27574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:48:06.366473   27574 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:48:06.367193   27574 out.go:303] Setting JSON to false
	I0601 11:48:06.384465   27574 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8256,"bootTime":1654101030,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:48:06.384559   27574 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:48:06.406120   27574 out.go:177] * [old-k8s-version-20220601114806-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:48:06.449304   27574 notify.go:193] Checking for updates...
	I0601 11:48:06.471046   27574 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:48:06.492918   27574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:48:06.514158   27574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:48:06.536316   27574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:48:06.558094   27574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:48:06.580746   27574 config.go:178] Loaded profile config "enable-default-cni-20220601113004-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:48:06.580839   27574 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:48:06.652013   27574 docker.go:137] docker version: linux-20.10.14
	I0601 11:48:06.652150   27574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:48:06.777354   27574 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:48:06.712794712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:48:06.800361   27574 out.go:177] * Using the docker driver based on user configuration
	I0601 11:48:06.822160   27574 start.go:284] selected driver: docker
	I0601 11:48:06.822219   27574 start.go:806] validating driver "docker" against <nil>
	I0601 11:48:06.822247   27574 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:48:06.825696   27574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:48:06.949750   27574 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-01 18:48:06.886420433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:48:06.949859   27574 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 11:48:06.950033   27574 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:48:06.972038   27574 out.go:177] * Using Docker Desktop driver with the root privilege
	I0601 11:48:06.993641   27574 cni.go:95] Creating CNI manager for ""
	I0601 11:48:06.993696   27574 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:48:06.993715   27574 start_flags.go:306] config:
	{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:48:07.015667   27574 out.go:177] * Starting control plane node old-k8s-version-20220601114806-16804 in cluster old-k8s-version-20220601114806-16804
	I0601 11:48:07.057523   27574 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:48:07.078620   27574 out.go:177] * Pulling base image ...
	I0601 11:48:07.121369   27574 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:48:07.121425   27574 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:48:07.121448   27574 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:48:07.121480   27574 cache.go:57] Caching tarball of preloaded images
	I0601 11:48:07.121659   27574 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:48:07.121677   27574 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:48:07.122577   27574 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:48:07.122671   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json: {Name:mk55ca0ff79972dfa1552e5f6d3cee7bbd1202ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:07.187817   27574 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:48:07.187836   27574 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:48:07.187844   27574 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:48:07.187968   27574 start.go:352] acquiring machines lock for old-k8s-version-20220601114806-16804: {Name:mke97f71f3781c3324662a5c4576dc1a6ff166e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:48:07.188111   27574 start.go:356] acquired machines lock for "old-k8s-version-20220601114806-16804" in 131.717µs
	I0601 11:48:07.188139   27574 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:48:07.188207   27574 start.go:131] createHost starting for "" (driver="docker")
	I0601 11:48:07.210206   27574 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0601 11:48:07.210726   27574 start.go:165] libmachine.API.Create for "old-k8s-version-20220601114806-16804" (driver="docker")
	I0601 11:48:07.210774   27574 client.go:168] LocalClient.Create starting
	I0601 11:48:07.210915   27574 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem
	I0601 11:48:07.210994   27574 main.go:134] libmachine: Decoding PEM data...
	I0601 11:48:07.211030   27574 main.go:134] libmachine: Parsing certificate...
	I0601 11:48:07.211138   27574 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem
	I0601 11:48:07.211186   27574 main.go:134] libmachine: Decoding PEM data...
	I0601 11:48:07.211208   27574 main.go:134] libmachine: Parsing certificate...
	I0601 11:48:07.212779   27574 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601114806-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0601 11:48:07.276102   27574 cli_runner.go:211] docker network inspect old-k8s-version-20220601114806-16804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0601 11:48:07.276182   27574 network_create.go:272] running [docker network inspect old-k8s-version-20220601114806-16804] to gather additional debugging logs...
	I0601 11:48:07.276197   27574 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220601114806-16804
	W0601 11:48:07.337517   27574 cli_runner.go:211] docker network inspect old-k8s-version-20220601114806-16804 returned with exit code 1
	I0601 11:48:07.337548   27574 network_create.go:275] error running [docker network inspect old-k8s-version-20220601114806-16804]: docker network inspect old-k8s-version-20220601114806-16804: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220601114806-16804
	I0601 11:48:07.337589   27574 network_create.go:277] output of [docker network inspect old-k8s-version-20220601114806-16804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220601114806-16804
	
	** /stderr **
	I0601 11:48:07.337662   27574 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0601 11:48:07.399506   27574 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000170058] misses:0}
	I0601 11:48:07.399549   27574 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0601 11:48:07.399571   27574 network_create.go:115] attempt to create docker network old-k8s-version-20220601114806-16804 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0601 11:48:07.399653   27574 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220601114806-16804
	I0601 11:48:07.495250   27574 network_create.go:99] docker network old-k8s-version-20220601114806-16804 192.168.49.0/24 created
	I0601 11:48:07.495284   27574 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20220601114806-16804" container
	I0601 11:48:07.495363   27574 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0601 11:48:07.560575   27574 cli_runner.go:164] Run: docker volume create old-k8s-version-20220601114806-16804 --label name.minikube.sigs.k8s.io=old-k8s-version-20220601114806-16804 --label created_by.minikube.sigs.k8s.io=true
	I0601 11:48:07.624268   27574 oci.go:103] Successfully created a docker volume old-k8s-version-20220601114806-16804
	I0601 11:48:07.624400   27574 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220601114806-16804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220601114806-16804 --entrypoint /usr/bin/test -v old-k8s-version-20220601114806-16804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -d /var/lib
	I0601 11:48:08.078523   27574 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220601114806-16804
	I0601 11:48:08.078605   27574 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:48:08.078626   27574 kic.go:179] Starting extracting preloaded images to volume ...
	I0601 11:48:08.078727   27574 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220601114806-16804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir
	I0601 11:48:12.263676   27574 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220601114806-16804:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a -I lz4 -xf /preloaded.tar -C /extractDir: (4.184887254s)
	I0601 11:48:12.263697   27574 kic.go:188] duration metric: took 4.185104 seconds to extract preloaded images to volume
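The step above streams the .tar.lz4 preload through tar inside a helper container. For reference, a host-side equivalent sketched under the assumption that the github.com/pierrec/lz4/v4 module is available; the file name is illustrative and the sketch only lists archive entries rather than extracting them:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("preloaded-images.tar.lz4") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Decompress the lz4 stream and walk the tar inside it.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(hdr.Name) // list entries to keep the sketch short
	}
}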
	I0601 11:48:12.263798   27574 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0601 11:48:12.389307   27574 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220601114806-16804 --name old-k8s-version-20220601114806-16804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220601114806-16804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220601114806-16804 --network old-k8s-version-20220601114806-16804 --ip 192.168.49.2 --volume old-k8s-version-20220601114806-16804:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a
	I0601 11:48:12.765357   27574 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Running}}
	I0601 11:48:12.839335   27574 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:48:12.920148   27574 cli_runner.go:164] Run: docker exec old-k8s-version-20220601114806-16804 stat /var/lib/dpkg/alternatives/iptables
	I0601 11:48:13.055365   27574 oci.go:247] the created container "old-k8s-version-20220601114806-16804" has a running status.
	I0601 11:48:13.055407   27574 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa...
	I0601 11:48:13.315268   27574 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0601 11:48:13.425510   27574 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:48:13.492129   27574 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0601 11:48:13.492145   27574 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220601114806-16804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0601 11:48:13.632075   27574 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:48:13.703671   27574 machine.go:88] provisioning docker machine ...
	I0601 11:48:13.703709   27574 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601114806-16804"
	I0601 11:48:13.703792   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:13.773978   27574 main.go:134] libmachine: Using SSH client type: native
	I0601 11:48:13.774194   27574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58779 <nil> <nil>}
	I0601 11:48:13.774208   27574 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601114806-16804 && echo "old-k8s-version-20220601114806-16804" | sudo tee /etc/hostname
	I0601 11:48:13.900999   27574 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601114806-16804
	
	I0601 11:48:13.901066   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:13.969263   27574 main.go:134] libmachine: Using SSH client type: native
	I0601 11:48:13.969405   27574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58779 <nil> <nil>}
	I0601 11:48:13.969420   27574 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601114806-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601114806-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601114806-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:48:14.093369   27574 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:48:14.093391   27574 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:48:14.093423   27574 ubuntu.go:177] setting up certificates
	I0601 11:48:14.093430   27574 provision.go:83] configureAuth start
	I0601 11:48:14.093493   27574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:48:14.160581   27574 provision.go:138] copyHostCerts
	I0601 11:48:14.160663   27574 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:48:14.160671   27574 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:48:14.160778   27574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:48:14.160972   27574 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:48:14.160982   27574 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:48:14.161040   27574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:48:14.161224   27574 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:48:14.161231   27574 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:48:14.161292   27574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:48:14.161414   27574 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601114806-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601114806-16804]
	I0601 11:48:14.218725   27574 provision.go:172] copyRemoteCerts
	I0601 11:48:14.218771   27574 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:48:14.218812   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:14.288265   27574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58779 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:48:14.375125   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:48:14.392659   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0601 11:48:14.409850   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:48:14.427082   27574 provision.go:86] duration metric: configureAuth took 333.642296ms
	I0601 11:48:14.427096   27574 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:48:14.427247   27574 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:48:14.427307   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:14.496613   27574 main.go:134] libmachine: Using SSH client type: native
	I0601 11:48:14.496794   27574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58779 <nil> <nil>}
	I0601 11:48:14.496810   27574 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:48:14.615504   27574 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:48:14.615517   27574 ubuntu.go:71] root file system type: overlay
	I0601 11:48:14.615685   27574 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:48:14.615787   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:14.684081   27574 main.go:134] libmachine: Using SSH client type: native
	I0601 11:48:14.684266   27574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58779 <nil> <nil>}
	I0601 11:48:14.684318   27574 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:48:14.812401   27574 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:48:14.812478   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:14.881022   27574 main.go:134] libmachine: Using SSH client type: native
	I0601 11:48:14.881178   27574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 58779 <nil> <nil>}
	I0601 11:48:14.881193   27574 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:48:15.453741   27574 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-01 18:48:14.820308965 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
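	The diff-then-replace one-liner above only swaps the unit and restarts Docker when the staged file actually differs: diff -u exits non-zero on any difference (or when the live file does not yet exist), which triggers the || branch. The same idiom as a reusable sketch (function and argument names are illustrative):

	    replace_if_changed() {  # replace_if_changed STAGED LIVE SERVICE
	      local staged=$1 live=$2 svc=$3
	      if ! sudo diff -u "$live" "$staged"; then
	        sudo mv "$staged" "$live"
	        sudo systemctl daemon-reload && sudo systemctl -f restart "$svc"
	      fi
	    }
	    replace_if_changed /lib/systemd/system/docker.service.new \
	        /lib/systemd/system/docker.service docker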
	
	I0601 11:48:15.453771   27574 machine.go:91] provisioned docker machine in 1.750093306s
	I0601 11:48:15.453795   27574 client.go:171] LocalClient.Create took 8.243058125s
	I0601 11:48:15.453823   27574 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220601114806-16804" took 8.243153386s
	I0601 11:48:15.453844   27574 start.go:306] post-start starting for "old-k8s-version-20220601114806-16804" (driver="docker")
	I0601 11:48:15.453860   27574 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:48:15.453946   27574 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:48:15.454017   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:15.523338   27574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58779 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:48:15.614530   27574 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:48:15.617747   27574 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:48:15.617764   27574 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:48:15.617771   27574 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:48:15.617775   27574 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:48:15.617783   27574 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:48:15.617902   27574 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:48:15.618055   27574 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:48:15.618206   27574 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:48:15.625380   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:48:15.642415   27574 start.go:309] post-start completed in 188.55032ms
	I0601 11:48:15.642945   27574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:48:15.711118   27574 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:48:15.711522   27574 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:48:15.711571   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:15.779894   27574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58779 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:48:15.863658   27574 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:48:15.868229   27574 start.go:134] duration metric: createHost completed in 8.680059555s
	I0601 11:48:15.868246   27574 start.go:81] releasing machines lock for "old-k8s-version-20220601114806-16804", held for 8.680184841s
	I0601 11:48:15.868333   27574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:48:15.936389   27574 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:48:15.936396   27574 ssh_runner.go:195] Run: systemctl --version
	I0601 11:48:15.936446   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:15.936458   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:16.009761   27574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58779 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:48:16.011651   27574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58779 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:48:16.230273   27574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:48:16.239417   27574 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:48:16.248787   27574 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:48:16.248840   27574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:48:16.257849   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:48:16.270679   27574 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:48:16.341070   27574 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:48:16.404537   27574 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:48:16.414195   27574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:48:16.482181   27574 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:48:16.491625   27574 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:48:16.525643   27574 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:48:16.584687   27574 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 11:48:16.584883   27574 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601114806-16804 dig +short host.docker.internal
	I0601 11:48:16.725688   27574 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:48:16.726060   27574 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:48:16.731524   27574 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
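	This /etc/hosts edit is idempotent: any existing host.minikube.internal line is filtered out with grep -v before the fresh tab-separated mapping is appended, and the rebuilt file is copied back in one step. The same pattern as a general sketch (helper name is illustrative; the grep pattern treats NAME as a regex, which is harmless for these fixed names):

	    set_host_entry() {  # set_host_entry IP NAME
	      local ip=$1 name=$2
	      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts
	    }
	    set_host_entry 192.168.65.2 host.minikube.internal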
	I0601 11:48:16.741625   27574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:48:16.811950   27574 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:48:16.812019   27574 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:48:16.842631   27574 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:48:16.842646   27574 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:48:16.842712   27574 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:48:16.873329   27574 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:48:16.873366   27574 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:48:16.873461   27574 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:48:16.944301   27574 cni.go:95] Creating CNI manager for ""
	I0601 11:48:16.944313   27574 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:48:16.944325   27574 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:48:16.944336   27574 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601114806-16804 NodeName:old-k8s-version-20220601114806-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:48:16.944449   27574 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601114806-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601114806-16804
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
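	A config of this shape can be exercised without touching the node's state by adding --dry-run to the same init invocation, which renders and validates everything but applies nothing (a sketch; kubeadm v1.16 supports the flag):

	    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run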
	
	I0601 11:48:16.944533   27574 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601114806-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
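	The kubelet drop-in above reuses the ExecStart-reset idiom from the Docker unit: the empty ExecStart= clears the base definition before the pinned v1.16.0 binary and its flags are set. Once the files land in the next steps, the merged result can be confirmed with:

	    sudo systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service followed by the
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in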
	I0601 11:48:16.944590   27574 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 11:48:16.952463   27574 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:48:16.952510   27574 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:48:16.959656   27574 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0601 11:48:16.973025   27574 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:48:16.985993   27574 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0601 11:48:16.998626   27574 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:48:17.002476   27574 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:48:17.011691   27574 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804 for IP: 192.168.49.2
	I0601 11:48:17.011821   27574 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:48:17.011867   27574 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:48:17.011912   27574 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.key
	I0601 11:48:17.011925   27574 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.crt with IP's: []
	I0601 11:48:17.055403   27574 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.crt ...
	I0601 11:48:17.055412   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.crt: {Name:mk56a19e7e568e76f981b761f0feeac30209fed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:17.055726   27574 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.key ...
	I0601 11:48:17.055734   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.key: {Name:mk425ada7b1a4b8283fc990b5fe867b2ef0a4cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:17.055940   27574 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2
	I0601 11:48:17.055955   27574 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0601 11:48:17.123017   27574 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt.dd3b5fb2 ...
	I0601 11:48:17.123030   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt.dd3b5fb2: {Name:mke5c9565422107df61d86d8eb589801ff7c201f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:17.123253   27574 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2 ...
	I0601 11:48:17.123261   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2: {Name:mkaf5a1508a1799324e236d6c7098d7f8cea374d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:17.123445   27574 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt
	I0601 11:48:17.123620   27574 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key
	I0601 11:48:17.123782   27574 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key
	I0601 11:48:17.123797   27574 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt with IP's: []
	I0601 11:48:17.242545   27574 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt ...
	I0601 11:48:17.242559   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt: {Name:mk15626badc03822bc7c1c94cc21b184344cfc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:17.242815   27574 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key ...
	I0601 11:48:17.242829   27574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key: {Name:mk5799fc9ae7c9da785ee89dfc72eaa6f9f6fc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:48:17.243209   27574 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:48:17.243249   27574 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:48:17.243257   27574 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:48:17.243286   27574 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:48:17.243313   27574 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:48:17.243342   27574 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:48:17.243406   27574 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:48:17.243882   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:48:17.262497   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:48:17.279237   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:48:17.297394   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:48:17.314508   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:48:17.331693   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:48:17.348958   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:48:17.366438   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:48:17.383641   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:48:17.400599   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:48:17.418181   27574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:48:17.435850   27574 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:48:17.448690   27574 ssh_runner.go:195] Run: openssl version
	I0601 11:48:17.453606   27574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:48:17.461147   27574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:48:17.465075   27574 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:48:17.465121   27574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:48:17.470626   27574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 11:48:17.478607   27574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:48:17.487358   27574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:48:17.491370   27574 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:48:17.491421   27574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:48:17.496698   27574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:48:17.504544   27574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:48:17.512826   27574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:48:17.517127   27574 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:48:17.517164   27574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:48:17.523408   27574 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
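	The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject-hash values, the c_rehash-style layout that lets TLS libraries look certificates up by hash in /etc/ssl/certs. The hash for any certificate can be derived the same way (sketch):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"

	For minikubeCA.pem this yields b5213941, matching the link created above.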
	I0601 11:48:17.531824   27574 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:48:17.531920   27574 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:48:17.560148   27574 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:48:17.568169   27574 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:48:17.575507   27574 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:48:17.575558   27574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:48:17.582643   27574 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:48:17.582670   27574 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:48:18.300623   27574 out.go:204]   - Generating certificates and keys ...
	I0601 11:48:21.431579   27574 out.go:204]   - Booting up control plane ...
	W0601 11:50:16.372943   27574 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220601114806-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220601114806-16804 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
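	When wait-control-plane times out like this, the suggestions embedded in the message are the right first moves; gathered into one sequence to run on the node (a sketch; these are exactly the probes the output recommends):

	    curl -sSL http://localhost:10248/healthz; echo
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50
	    docker ps -a | grep kube | grep -v pause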
	
	I0601 11:50:16.372977   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:50:16.810356   27574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:50:16.820615   27574 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:50:16.820653   27574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:50:16.828894   27574 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:50:16.828919   27574 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:50:17.660372   27574 out.go:204]   - Generating certificates and keys ...
	I0601 11:50:18.553841   27574 out.go:204]   - Booting up control plane ...
	I0601 11:52:13.471406   27574 kubeadm.go:397] StartCluster complete in 3m55.941196979s
	I0601 11:52:13.471486   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:52:13.503409   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.503441   27574 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:52:13.503499   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:52:13.535257   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.535270   27574 logs.go:276] No container was found matching "etcd"
	I0601 11:52:13.535329   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:52:13.573955   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.573968   27574 logs.go:276] No container was found matching "coredns"
	I0601 11:52:13.574034   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:52:13.607720   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.607753   27574 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:52:13.607810   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:52:13.639816   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.639835   27574 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:52:13.639901   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:52:13.673913   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.673926   27574 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:52:13.673986   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:52:13.711697   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.711712   27574 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:52:13.711793   27574 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:52:13.746279   27574 logs.go:274] 0 containers: []
	W0601 11:52:13.746293   27574 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:52:13.746300   27574 logs.go:123] Gathering logs for kubelet ...
	I0601 11:52:13.746307   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:52:13.791086   27574 logs.go:123] Gathering logs for dmesg ...
	I0601 11:52:13.791104   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:52:13.807020   27574 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:52:13.807036   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:52:13.878010   27574 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:52:13.878022   27574 logs.go:123] Gathering logs for Docker ...
	I0601 11:52:13.878030   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:52:13.896985   27574 logs.go:123] Gathering logs for container status ...
	I0601 11:52:13.897002   27574 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:52:15.965736   27574 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.068735414s)
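	The backtick expression in that command is a fallback chain: if crictl is on PATH, `which crictl` resolves it; otherwise `echo crictl` substitutes the bare name, that invocation fails, and control falls through to `sudo docker ps -a`. Written out plainly (sketch):

	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi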
	W0601 11:52:15.965853   27574 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 11:52:15.965868   27574 out.go:239] * 
	W0601 11:52:15.965983   27574 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:52:15.965997   27574 out.go:239] * 
	W0601 11:52:15.966566   27574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:52:16.052245   27574 out.go:177] 
	W0601 11:52:16.073390   27574 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 11:52:16.073464   27574 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 11:52:16.073507   27574 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 11:52:16.115452   27574 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:190: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
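Note: the kubeadm failure above reduces to the kubelet's healthz endpoint on port 10248 never answering. A minimal way to reproduce the probe by hand, retry with the cgroup-driver suggestion printed above, and collect logs for the suggested GitHub issue might look like the following (a sketch only; it assumes curl is available inside the kicbase node container, and reuses the profile/container name from this run):

    # Re-run the health probe that kubeadm was polling, from inside the node container.
    docker exec old-k8s-version-20220601114806-16804 curl -sSL http://localhost:10248/healthz

    # Retry the same start with the suggested kubelet cgroup driver override.
    out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --driver=docker \
      --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd

    # Gather full logs to attach to the issue, as the advice box above suggests.
    out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-20220601114806-16804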
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273",
	        "Created": "2022-06-01T18:48:12.461821519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 193150,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:48:12.764310104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hosts",
	        "LogPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273-json.log",
	        "Name": "/old-k8s-version-20220601114806-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601114806-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601114806-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601114806-16804",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601114806-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601114806-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72175ab90dee0fbf5b35e66b92e3c1f135a8f2266454ef881118c146bca502ba",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58782"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58783"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/72175ab90dee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601114806-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff69f8f777d8",
	                        "old-k8s-version-20220601114806-16804"
	                    ],
	                    "NetworkID": "246cf6a028e4e11a14e92d87f31441d673c4de3a42936ed926f0c32bee110562",
	                    "EndpointID": "6a578a3198b73e45e42a4afd315046a792ebb2ec94118f970e944b211f341923",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
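Note: the docker inspect dump above is mostly boilerplate; when triaging, individual fields can be pulled with docker inspect's --format flag instead of reading the whole document. For example, using Go template paths visible in the JSON above:

    # Container state, as reported under "State" in the dump above.
    docker inspect --format '{{.State.Status}}' old-k8s-version-20220601114806-16804

    # Host port mapped to the apiserver port 8443/tcp (58783 in this run).
    docker inspect --format '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' old-k8s-version-20220601114806-16804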
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
E0601 11:52:16.494655   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 6 (469.91451ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0601 11:52:16.824579   28212 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601114806-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.55s)
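Note: the status output above both reports the host as "Running" and warns that kubectl points at a stale context; its own suggested fix, run against this profile, would be:

    # Rewrite the kubeconfig entry for this profile, per the warning above.
    out/minikube-darwin-amd64 update-context -p old-k8s-version-20220601114806-16804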

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601114806-16804 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601114806-16804 create -f testdata/busybox.yaml: exit status 1 (30.785024ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220601114806-16804" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:198: kubectl --context old-k8s-version-20220601114806-16804 create -f testdata/busybox.yaml failed: exit status 1
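Note: the create fails because the kubeconfig never gained an entry for this profile, the same root cause as the earlier "does not appear in .../kubeconfig" status error. Before retrying a kubectl run like the one above, the available contexts can be listed and one selected with:

    # List contexts known to the active kubeconfig; the profile name would
    # appear here if minikube had finished writing it.
    kubectl config get-contexts
    kubectl config use-context old-k8s-version-20220601114806-16804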
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273",
	        "Created": "2022-06-01T18:48:12.461821519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 193150,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:48:12.764310104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hosts",
	        "LogPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273-json.log",
	        "Name": "/old-k8s-version-20220601114806-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601114806-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601114806-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601114806-16804",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601114806-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601114806-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72175ab90dee0fbf5b35e66b92e3c1f135a8f2266454ef881118c146bca502ba",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58779"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58780"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58781"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58782"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58783"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/72175ab90dee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601114806-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff69f8f777d8",
	                        "old-k8s-version-20220601114806-16804"
	                    ],
	                    "NetworkID": "246cf6a028e4e11a14e92d87f31441d673c4de3a42936ed926f0c32bee110562",
	                    "EndpointID": "6a578a3198b73e45e42a4afd315046a792ebb2ec94118f970e944b211f341923",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
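
Most of the post-mortem dump above is static container configuration; when triaging, a Go-template query can pull out just the fields that actually vary. A minimal sketch using standard docker inspect formatting (profile name taken from this run; it assumes each exposed port has at least one host binding, as in the dump above):

    # Container state, then the host ports mapped for 22/2376/5000/8443/32443
    docker inspect -f '{{.State.Status}}' old-k8s-version-20220601114806-16804
    docker inspect -f '{{range $port, $binding := .NetworkSettings.Ports}}{{$port}} -> {{(index $binding 0).HostPort}}{{println}}{{end}}' old-k8s-version-20220601114806-16804
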
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 6 (502.537445ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:52:17.438744   28225 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601114806-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
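
The stale-context warning above points at the recovery the tool itself suggests. A minimal manual follow-up outside the test harness (a sketch, assuming the profile still exists on the host):

    # Re-point kubectl at the current endpoint for this profile, then verify the entry exists
    out/minikube-darwin-amd64 update-context -p old-k8s-version-20220601114806-16804
    kubectl config get-contexts old-k8s-version-20220601114806-16804
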
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

-- stdout --
	[docker inspect output omitted: byte-for-byte identical to the post-mortem dump shown above for container "old-k8s-version-20220601114806-16804"]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 6 (482.716228ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:52:18.024055   28241 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601114806-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220601114806-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0601 11:52:35.922830   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:52:44.023786   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.134108   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.140602   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.151939   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.172286   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.213406   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.293613   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.453898   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:54.775556   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:55.416468   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:56.696626   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:52:59.256809   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:53:03.740448   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:53:04.377366   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:53:04.573492   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:53:14.588616   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:53:14.619450   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:53:31.490590   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 11:53:35.099974   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
start_stop_delete_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220601114806-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.227599883s)

-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:209: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220601114806-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
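
The connection-refused errors above mean the apiserver inside the node was not listening on 8443 when the addon callbacks ran. A quick liveness probe, as a sketch (assuming `minikube ssh` still works against this profile):

    # Probe the apiserver from inside the node; a healthy control plane returns 'ok'
    out/minikube-darwin-amd64 ssh -p old-k8s-version-20220601114806-16804 "curl -sk https://localhost:8443/healthz"
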
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220601114806-16804 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601114806-16804 describe deploy/metrics-server -n kube-system: exit status 1 (30.664243ms)

** stderr ** 
	error: context "old-k8s-version-20220601114806-16804" does not exist

** /stderr **
start_stop_delete_test.go:219: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220601114806-16804 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:223: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
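
Had the kubectl context existed, the image assertion would reduce to reading the deployment spec. A sketch of the manual equivalent of the check above:

    # Print the container images in the metrics-server deployment
    # (expected to contain fake.domain/k8s.gcr.io/echoserver:1.4)
    kubectl --context old-k8s-version-20220601114806-16804 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
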
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

-- stdout --
	[docker inspect output omitted: byte-for-byte identical to the post-mortem dump shown above for container "old-k8s-version-20220601114806-16804"]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 6 (453.521437ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0601 11:53:47.811587   28279 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220601114806-16804" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.79s)
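
The recurring cert_rotation errors interleaved above come from client-go's certificate-rotation watcher (cert_rotation.go:168) still holding paths to client certs of profiles deleted earlier in the run; they are noise for this test but flood the log. A quick way to see which profiles still have certs on disk (a sketch, using this job's MINIKUBE_HOME layout):

    # List profiles whose client certs still exist under the integration .minikube
    ls /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/*/client.crt
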

TestStartStop/group/old-k8s-version/serial/SecondStart (490.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0601 11:53:50.934133   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:53:57.842550   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:54:05.945507   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:54:07.876752   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:54:16.060995   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:54:32.638576   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:54:38.188470   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:55:00.334804   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 11:55:20.698702   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:37.980541   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.418827   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.425051   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.437127   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.457312   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.497996   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.579847   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:44.742104   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:45.064328   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:45.704913   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:46.986727   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:48.412637   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:49.547034   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:55:54.668768   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:56:04.908916   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:56:14.002070   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:56:22.094922   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:56:23.702584   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:56:25.389510   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 11:56:41.681969   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:56:49.786440   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:57:06.349738   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
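
The cert_rotation.go:168 lines above are emitted by client-go's certificate reloader, which keeps watching the client.crt of every kubeconfig user even after earlier tests have deleted those profiles; they are leftover noise from cleaned-up profiles rather than part of this failure. A minimal sketch (assuming kubectl is on PATH and the kubeconfig references file-based client certificates, as minikube's does) that lists which kubeconfig users point at missing certificate files:

    #!/usr/bin/env bash
    # Sketch: report kubeconfig users whose client certificate file no longer
    # exists on disk -- the condition behind the cert_rotation.go:168 errors.
    set -euo pipefail
    kubectl config view -o jsonpath='{range .users[*]}{.name}{"\t"}{.user.client-certificate}{"\n"}{end}' |
    while IFS=$'\t' read -r user crt; do
      if [ -n "$crt" ] && [ ! -f "$crt" ]; then
        echo "missing client cert for user $user: $crt"
      fi
    done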

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m6.024229809s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220601114806-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220601114806-16804 in cluster old-k8s-version-20220601114806-16804
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220601114806-16804" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:53:49.869744   28319 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:53:49.870058   28319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:53:49.870063   28319 out.go:309] Setting ErrFile to fd 2...
	I0601 11:53:49.870067   28319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:53:49.870200   28319 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:53:49.870479   28319 out.go:303] Setting JSON to false
	I0601 11:53:49.885748   28319 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8599,"bootTime":1654101030,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:53:49.885855   28319 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:53:49.907511   28319 out.go:177] * [old-k8s-version-20220601114806-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:53:49.929263   28319 notify.go:193] Checking for updates...
	I0601 11:53:49.950161   28319 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:53:49.972303   28319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:53:49.993555   28319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:53:50.019203   28319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:53:50.040605   28319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:53:50.063270   28319 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:53:50.085267   28319 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 11:53:50.106145   28319 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:53:50.179855   28319 docker.go:137] docker version: linux-20.10.14
	I0601 11:53:50.179965   28319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:53:50.309210   28319 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:53:50.252610494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:53:50.352718   28319 out.go:177] * Using the docker driver based on existing profile
	I0601 11:53:50.373802   28319 start.go:284] selected driver: docker
	I0601 11:53:50.373850   28319 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:50.374023   28319 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:53:50.377412   28319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:53:50.505237   28319 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:53:50.450324655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:53:50.505421   28319 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:53:50.505439   28319 cni.go:95] Creating CNI manager for ""
	I0601 11:53:50.505447   28319 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:53:50.505454   28319 start_flags.go:306] config:
	{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:50.527438   28319 out.go:177] * Starting control plane node old-k8s-version-20220601114806-16804 in cluster old-k8s-version-20220601114806-16804
	I0601 11:53:50.548947   28319 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:53:50.570241   28319 out.go:177] * Pulling base image ...
	I0601 11:53:50.613151   28319 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:53:50.613177   28319 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:53:50.613243   28319 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:53:50.613268   28319 cache.go:57] Caching tarball of preloaded images
	I0601 11:53:50.613461   28319 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:53:50.613486   28319 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:53:50.614580   28319 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:53:50.680684   28319 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:53:50.680699   28319 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:53:50.680708   28319 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:53:50.680756   28319 start.go:352] acquiring machines lock for old-k8s-version-20220601114806-16804: {Name:mke97f71f3781c3324662a5c4576dc1a6ff166e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:53:50.680837   28319 start.go:356] acquired machines lock for "old-k8s-version-20220601114806-16804" in 61.411µs
	I0601 11:53:50.680855   28319 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:53:50.680865   28319 fix.go:55] fixHost starting: 
	I0601 11:53:50.681120   28319 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:53:50.749601   28319 fix.go:103] recreateIfNeeded on old-k8s-version-20220601114806-16804: state=Stopped err=<nil>
	W0601 11:53:50.749634   28319 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:53:50.771624   28319 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601114806-16804" ...
	I0601 11:53:50.793636   28319 cli_runner.go:164] Run: docker start old-k8s-version-20220601114806-16804
	I0601 11:53:51.159654   28319 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:53:51.244535   28319 kic.go:416] container "old-k8s-version-20220601114806-16804" state is running.
	I0601 11:53:51.245201   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:51.377956   28319 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:53:51.378362   28319 machine.go:88] provisioning docker machine ...
	I0601 11:53:51.378386   28319 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601114806-16804"
	I0601 11:53:51.378453   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.457140   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:51.457343   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:51.457358   28319 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601114806-16804 && echo "old-k8s-version-20220601114806-16804" | sudo tee /etc/hostname
	I0601 11:53:51.580646   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601114806-16804
	
	I0601 11:53:51.580749   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.656628   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:51.656782   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:51.656796   28319 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601114806-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601114806-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601114806-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:53:51.776288   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:53:51.776311   28319 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:53:51.776328   28319 ubuntu.go:177] setting up certificates
	I0601 11:53:51.776340   28319 provision.go:83] configureAuth start
	I0601 11:53:51.776419   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:51.850151   28319 provision.go:138] copyHostCerts
	I0601 11:53:51.850269   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:53:51.850278   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:53:51.850366   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:53:51.850623   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:53:51.850633   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:53:51.850695   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:53:51.850828   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:53:51.850834   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:53:51.850894   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:53:51.851013   28319 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601114806-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601114806-16804]
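
The server certificate generated here is signed by the profile CA and carries the SAN list shown above (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile hostname). minikube builds it in Go; the following is only a rough openssl analogue under hypothetical file names (san.cnf, ca.pem, ca-key.pem, server*.pem), not the code path actually used:

    # Rough openssl analogue of the SAN server certificate above (sketch only;
    # the file names are placeholders, not the paths minikube writes).
    printf 'subjectAltName = IP:192.168.49.2, IP:127.0.0.1, DNS:localhost, DNS:minikube, DNS:old-k8s-version-20220601114806-16804\n' > san.cnf
    openssl req -new -newkey rsa:2048 -nodes \
      -subj "/O=jenkins.old-k8s-version-20220601114806-16804" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -extfile san.cnf -out server.pem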
	I0601 11:53:51.901708   28319 provision.go:172] copyRemoteCerts
	I0601 11:53:51.901767   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:53:51.901818   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.975877   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:52.060009   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:53:52.077110   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:53:52.093871   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0601 11:53:52.110974   28319 provision.go:86] duration metric: configureAuth took 334.623818ms
	I0601 11:53:52.110987   28319 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:53:52.111171   28319 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:53:52.111232   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.184299   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.184438   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.184448   28319 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:53:52.302847   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:53:52.302863   28319 ubuntu.go:71] root file system type: overlay
	I0601 11:53:52.303018   28319 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:53:52.303102   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.376389   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.376552   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.376603   28319 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:53:52.502277   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:53:52.502373   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.575586   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.575726   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.575739   28319 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:53:52.696095   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
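
The command above is an update-only-if-changed guard: diff exits non-zero when the freshly rendered unit differs from the installed one, and only then is the new file moved into place and docker reloaded and restarted, keeping repeated provisioning runs idempotent. The same pattern, unrolled for readability:

    #!/usr/bin/env bash
    # Replace and restart docker.service only when the rendered unit changed;
    # diff exits non-zero when the two files differ.
    set -euo pipefail
    unit=/lib/systemd/system/docker.service
    if ! sudo diff -u "$unit" "$unit.new"; then
      sudo mv "$unit.new" "$unit"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi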
	I0601 11:53:52.696111   28319 machine.go:91] provisioned docker machine in 1.317750791s
	I0601 11:53:52.696121   28319 start.go:306] post-start starting for "old-k8s-version-20220601114806-16804" (driver="docker")
	I0601 11:53:52.696125   28319 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:53:52.696189   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:53:52.696241   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.769932   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:52.855461   28319 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:53:52.859028   28319 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:53:52.859043   28319 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:53:52.859052   28319 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:53:52.859056   28319 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:53:52.859064   28319 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:53:52.859169   28319 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:53:52.859314   28319 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:53:52.859492   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:53:52.866875   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:53:52.884313   28319 start.go:309] post-start completed in 188.184945ms
	I0601 11:53:52.884426   28319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:53:52.884507   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.959492   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.043087   28319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:53:53.047543   28319 fix.go:57] fixHost completed within 2.366693794s
	I0601 11:53:53.047555   28319 start.go:81] releasing machines lock for "old-k8s-version-20220601114806-16804", held for 2.366727273s
	I0601 11:53:53.047629   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:53.121099   28319 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:53:53.121221   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:53.121364   28319 ssh_runner.go:195] Run: systemctl --version
	I0601 11:53:53.121966   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:53.202586   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.205983   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.287975   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:53:53.422168   28319 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:53:53.432821   28319 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:53:53.432877   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:53:53.443234   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:53:53.456386   28319 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:53:53.525203   28319 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:53:53.595305   28319 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:53:53.605613   28319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:53:53.677054   28319 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:53:53.687222   28319 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:53:53.721998   28319 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:53:53.799095   28319 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 11:53:53.799216   28319 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601114806-16804 dig +short host.docker.internal
	I0601 11:53:53.940925   28319 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:53:53.941045   28319 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:53:53.945523   28319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
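
This hosts update is the strip-then-append idiom: drop any stale host.minikube.internal line, append the fresh mapping, and install the result with a single cp so readers never see a half-written /etc/hosts. An unrolled sketch of the same one-liner (192.168.65.2 is the address dug out of host.docker.internal just above):

    #!/usr/bin/env bash
    # Idempotently (re)write the host.minikube.internal entry in /etc/hosts.
    set -euo pipefail
    ip=192.168.65.2
    tmp=$(mktemp)
    grep -v $'\thost.minikube.internal$' /etc/hosts > "$tmp" || true  # grep exits 1 if every line matched
    printf '%s\thost.minikube.internal\n' "$ip" >> "$tmp"
    sudo cp "$tmp" /etc/hosts
    rm -f "$tmp"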
	I0601 11:53:53.955094   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:54.028140   28319 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:53:54.028206   28319 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:53:54.058427   28319 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:53:54.058444   28319 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:53:54.058545   28319 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:53:54.088697   28319 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:53:54.088719   28319 cache_images.go:84] Images are preloaded, skipping loading
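
"Images are preloaded, skipping loading" means the docker images listing above already contains everything the v1.16.0 preload ships, so the tarball does not need to be extracted again. A short sketch that re-checks the same condition (image list copied from the log output above):

    #!/usr/bin/env bash
    # Verify every image from the v1.16.0 preload is present in the node's
    # docker daemon; print any that are missing.
    set -euo pipefail
    want=(
      gcr.io/k8s-minikube/storage-provisioner:v5
      k8s.gcr.io/kube-apiserver:v1.16.0
      k8s.gcr.io/kube-controller-manager:v1.16.0
      k8s.gcr.io/kube-proxy:v1.16.0
      k8s.gcr.io/kube-scheduler:v1.16.0
      k8s.gcr.io/etcd:3.3.15-0
      k8s.gcr.io/coredns:1.6.2
      k8s.gcr.io/pause:3.1
    )
    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    for img in "${want[@]}"; do
      grep -qxF "$img" <<<"$have" || echo "missing: $img"
    done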
	I0601 11:53:54.088807   28319 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:53:54.166463   28319 cni.go:95] Creating CNI manager for ""
	I0601 11:53:54.166476   28319 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:53:54.166488   28319 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:53:54.166502   28319 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601114806-16804 NodeName:old-k8s-version-20220601114806-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:53:54.166740   28319 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601114806-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601114806-16804
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
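
The cgroupDriver: systemd value in the KubeletConfiguration above has to agree with what the container runtime reports; minikube probes the node's runtime with the docker info --format {{.CgroupDriver}} command logged earlier, and a kubelet/runtime mismatch is a classic cause of control-plane bootstrap failures. A sketch of that consistency check:

    #!/usr/bin/env bash
    # Compare docker's cgroup driver with the kubelet config rendered above.
    set -euo pipefail
    actual=$(docker info --format '{{.CgroupDriver}}')
    if [ "$actual" != "systemd" ]; then
      echo "cgroup driver mismatch: docker=$actual, kubelet config=systemd" >&2
      exit 1
    fi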
	
	I0601 11:53:54.166870   28319 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601114806-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:53:54.166970   28319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 11:53:54.175057   28319 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:53:54.175168   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:53:54.182581   28319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0601 11:53:54.195344   28319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:53:54.209271   28319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0601 11:53:54.222455   28319 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:53:54.226242   28319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:53:54.235793   28319 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804 for IP: 192.168.49.2
	I0601 11:53:54.236026   28319 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:53:54.236076   28319 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:53:54.236166   28319 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.key
	I0601 11:53:54.236237   28319 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2
	I0601 11:53:54.236290   28319 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key
	I0601 11:53:54.236516   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:53:54.236567   28319 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:53:54.236582   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:53:54.236627   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:53:54.236663   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:53:54.236693   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:53:54.236758   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:53:54.237319   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:53:54.255312   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:53:54.273877   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:53:54.292370   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:53:54.309832   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:53:54.326977   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:53:54.344196   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:53:54.362336   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:53:54.379964   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:53:54.397530   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:53:54.417711   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:53:54.437491   28319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
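
The scp block above is a fixed source-to-destination copy plan: profile-scoped serving certs go to /var/lib/minikube/certs, the shared CAs go there as well, and the extra .pem files land in /usr/share/ca-certificates. A minimal Go sketch of that plan follows; the profile path and the copyFile helper are illustrative stand-ins for minikube's ssh_runner scp step, not its actual API.

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // copyFile is a hypothetical stand-in for the ssh_runner scp step;
    // here it simply copies on the local filesystem.
    func copyFile(src, dst string) error {
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	profile := "/path/to/.minikube/profiles/old-k8s-version" // assumption: profile dir
    	plan := map[string]string{                               // source -> destination, as in the log
    		filepath.Join(profile, "apiserver.crt"):    "/var/lib/minikube/certs/apiserver.crt",
    		filepath.Join(profile, "apiserver.key"):    "/var/lib/minikube/certs/apiserver.key",
    		filepath.Join(profile, "proxy-client.crt"): "/var/lib/minikube/certs/proxy-client.crt",
    		filepath.Join(profile, "proxy-client.key"): "/var/lib/minikube/certs/proxy-client.key",
    	}
    	for src, dst := range plan {
    		if err := copyFile(src, dst); err != nil {
    			fmt.Fprintln(os.Stderr, "copy failed:", err)
    		}
    	}
    }
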
	I0601 11:53:54.450542   28319 ssh_runner.go:195] Run: openssl version
	I0601 11:53:54.456042   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:53:54.464269   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.468369   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.468417   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.473721   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:53:54.481064   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:53:54.489014   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.493352   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.493405   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.498751   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:53:54.506172   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:53:54.514267   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.518553   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.518598   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.523963   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
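
The three openssl/ln pairs above follow the standard OpenSSL c_rehash convention: a CA placed in /usr/share/ca-certificates becomes discoverable once /etc/ssl/certs contains a symlink named after the cert's subject hash plus ".0" (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A hedged Go sketch of one such step, assuming root privileges and an openssl binary on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name, mirroring the shell commands in the log.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // emulate ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
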
	I0601 11:53:54.531759   28319 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:54.531914   28319 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:53:54.560485   28319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:53:54.568453   28319 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:53:54.568470   28319 kubeadm.go:626] restartCluster start
	I0601 11:53:54.568526   28319 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:53:54.576181   28319 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:54.576234   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
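
The cli_runner call above extracts the host port published for 8443/tcp with a Go template applied to docker container inspect. The same field can also be read from the inspect JSON directly; the struct below is pared down to just that field and is an illustration, not minikube's code.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspectResult mirrors the template
    // {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}
    type inspectResult struct {
    	NetworkSettings struct {
    		Ports map[string][]struct{ HostPort string }
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "container", "inspect",
    		"old-k8s-version-20220601114806-16804").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	var results []inspectResult // docker inspect returns a JSON array
    	if err := json.Unmarshal(out, &results); err != nil || len(results) == 0 {
    		fmt.Println("no inspect data")
    		return
    	}
    	bindings := results[0].NetworkSettings.Ports["8443/tcp"]
    	if len(bindings) == 0 {
    		fmt.Println("8443/tcp not published")
    		return
    	}
    	fmt.Println("host port:", bindings[0].HostPort)
    }
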
	I0601 11:53:54.648876   28319 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:53:54.649065   28319 kubeconfig.go:127] "old-k8s-version-20220601114806-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:53:54.649419   28319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
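
The WriteFile line above shows the kubeconfig repair being serialized through a lock with Delay:500ms and Timeout:1m0s. A sketch of that acquire-with-retry shape using a plain O_EXCL lock file follows; minikube uses its own lock package, so this is only the pattern, and the paths are hypothetical.

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire takes an exclusive lock file, retrying every delay until timeout.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer release()
    	// ... safely rewrite the kubeconfig while the lock is held ...
    	fmt.Println("lock held; writing kubeconfig")
    }
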
	I0601 11:53:54.650792   28319 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:53:54.658693   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:54.658754   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:54.667668   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	... (the same apiserver status check and pgrep probe repeated roughly every 200ms, failing identically with "Process exited with status 1" and empty stdout/stderr, through 11:53:57.688) ...
	I0601 11:53:57.688748   28319 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
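
The cadence above is a plain poll-until-deadline loop: run the pgrep probe, treat a non-zero exit as "not up yet", sleep, and retry until the timeout fires, which is what surfaces as kubeadm.go:601's "timed out waiting for the condition". A minimal sketch of the pattern; the interval and timeout here are read off the log, not taken from minikube's source.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears
    // or the deadline passes, mirroring the retry loop in the log.
    func waitForAPIServer(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // process found
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for the condition")
    }

    func main() {
    	if err := waitForAPIServer(200*time.Millisecond, 3*time.Second); err != nil {
    		fmt.Println("apiserver error:", err)
    	}
    }
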
	I0601 11:53:57.688756   28319 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:53:57.688806   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:53:57.716946   28319 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:53:57.727312   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:53:57.734908   28319 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 18:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jun  1 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jun  1 18:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jun  1 18:50 /etc/kubernetes/scheduler.conf
	
	I0601 11:53:57.734963   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:53:57.742318   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:53:57.749223   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:53:57.756324   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
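
The four grep runs above check that each static kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8443; any file missing that endpoint would have to be regenerated. A small sketch of the same check done locally (file list and endpoint copied from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%s needs regeneration\n", f) // missing file or stale endpoint
    			continue
    		}
    		fmt.Printf("%s OK\n", f)
    	}
    }
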
	I0601 11:53:57.763812   28319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:53:57.771443   28319 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:53:57.771471   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:57.824342   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.674608   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.883641   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.947348   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
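
Rather than a full kubeadm init, the restart path replays a fixed sequence of init phases against the rendered /var/tmp/minikube/kubeadm.yaml, as the five Run lines above show. A sketch of that sequence; the binary path and phase names are taken verbatim from the log.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm" // path from the log
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{ // run in order, as in the log
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }
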
	I0601 11:53:59.001013   28319 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:53:59.001108   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:59.510767   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (the pgrep probe re-ran every ~500ms, from 11:54:00.009 through 11:54:58.510, without ever finding a kube-apiserver process) ...
	I0601 11:54:59.010031   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:54:59.041359   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.041374   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:54:59.041433   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:54:59.070260   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.070272   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:54:59.070335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:54:59.100026   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.100038   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:54:59.100092   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:54:59.130410   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.130422   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:54:59.130489   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:54:59.161102   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.161116   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:54:59.161174   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:54:59.190924   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.190935   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:54:59.190999   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:54:59.220657   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.220668   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:54:59.220727   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:54:59.249159   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.249172   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:54:59.249178   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:54:59.249185   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:54:59.261384   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:54:59.261396   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:54:59.314775   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:54:59.314790   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:54:59.314813   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:54:59.327098   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:54:59.327111   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:01.380143   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053042018s)
	I0601 11:55:01.380273   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:01.380280   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
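
Each diagnostics pass above issues one docker ps query per control-plane component, using the k8s_<name> container-name filter; an empty result for all eight components is what produces the "No container was found" warnings. A minimal sketch of the scan (component list copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
    		"kube-controller-manager",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
    	}
    }
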
	I0601 11:55:03.922143   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:04.010905   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:04.040989   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.041000   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:04.041053   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:04.068936   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.068948   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:04.069005   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:04.097959   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.097971   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:04.098033   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:04.126721   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.126734   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:04.126798   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:04.159225   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.159236   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:04.159294   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:04.190775   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.190816   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:04.190876   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:04.221251   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.221264   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:04.221323   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:04.252908   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.252955   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:04.252962   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:04.252973   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:04.295721   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:04.295735   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:04.307860   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:04.307873   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:04.362481   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:04.362494   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:04.362502   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:04.374612   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:04.374623   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:06.432720   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058108483s)
	I0601 11:55:08.935099   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:09.011533   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:09.042307   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.042320   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:09.042373   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:09.071674   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.071686   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:09.071752   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:09.100500   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.100516   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:09.100572   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:09.129557   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.129568   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:09.129632   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:09.159131   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.159144   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:09.159198   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:09.188211   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.188224   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:09.188282   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:09.218887   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.218900   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:09.218955   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:09.248189   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.248204   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:09.248212   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:09.248220   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:09.292398   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:09.292412   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:09.305043   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:09.305056   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:09.358584   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:09.358623   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:09.358646   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:09.371613   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:09.371625   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:11.427594   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055980689s)
	I0601 11:55:13.928572   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:14.011456   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:14.041396   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.041409   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:14.041466   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:14.069221   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.069233   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:14.069300   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:14.098018   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.098031   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:14.098087   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:14.128468   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.128480   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:14.128538   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:14.162047   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.162059   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:14.162114   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:14.195633   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.195647   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:14.195716   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:14.224730   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.224743   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:14.224796   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:14.255413   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.255426   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:14.255449   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:14.255456   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:14.297925   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:14.297938   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:14.311464   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:14.311477   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:14.363749   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:14.363759   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:14.363766   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:14.377049   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:14.377063   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:16.431836   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054784141s)
	I0601 11:55:18.932093   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:19.009576   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:19.039961   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.039974   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:19.040032   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:19.069166   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.069178   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:19.069234   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:19.097392   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.097405   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:19.097468   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:19.128648   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.128660   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:19.128716   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:19.158222   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.158235   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:19.158294   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:19.188141   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.188155   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:19.188209   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:19.219575   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.219588   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:19.219654   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:19.253005   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.253019   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:19.253026   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:19.253035   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:19.266133   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:19.266149   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:21.320131   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053993397s)
	I0601 11:55:21.320234   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:21.320240   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:21.361727   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:21.361740   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:21.375163   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:21.375177   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:21.432802   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:23.934258   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:24.009921   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:24.040408   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.040420   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:24.040476   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:24.068603   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.068615   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:24.068673   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:24.097572   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.097584   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:24.097641   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:24.127008   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.127020   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:24.127083   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:24.157041   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.157054   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:24.157117   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:24.186748   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.186761   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:24.186819   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:24.215933   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.215946   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:24.216013   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:24.247816   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.247829   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:24.247836   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:24.247843   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:24.260281   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:24.260293   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:26.315423   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055142929s)
	I0601 11:55:26.315530   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:26.315537   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:26.354821   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:26.354835   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:26.369903   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:26.369926   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:26.426327   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:28.926931   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:29.009389   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:29.040058   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.040071   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:29.040129   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:29.068341   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.068353   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:29.068410   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:29.098806   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.098817   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:29.098876   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:29.128428   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.128462   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:29.128520   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:29.158686   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.158725   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:29.158785   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:29.188284   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.188295   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:29.188348   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:29.217778   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.217791   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:29.217855   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:29.247459   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.247472   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:29.247479   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:29.247485   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:29.290765   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:29.290780   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:29.302626   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:29.302638   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:29.356128   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:29.356140   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:29.356147   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:29.369506   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:29.369522   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:31.427130   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057620396s)
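Each cycle begins with the same per-component container scan: one docker ps -a call per control-plane piece, filtered on the k8s_<name> container-name prefix and formatted down to bare IDs. A sketch of that loop, assuming local docker access; the component list is read straight off the log lines above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components scanned in each cycle of the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
	"kube-controller-manager",
}

func main() {
	for _, name := range components {
		// Same filter/format flags as the docker ps lines in the log.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps for %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}

Every scan here returns 0 containers, which is why each cycle falls through to the generic log-gathering steps instead of fetching per-container logs.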
	I0601 11:55:33.928625   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:34.009592   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:34.039227   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.039241   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:34.039301   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:34.068316   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.068329   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:34.068388   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:34.097349   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.097360   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:34.097414   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:34.127402   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.127415   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:34.127473   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:34.158010   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.158023   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:34.158091   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:34.189587   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.189604   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:34.189668   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:34.219589   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.219601   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:34.219659   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:34.251097   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.251111   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:34.251118   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:34.251125   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:34.294366   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:34.294381   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:34.306716   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:34.306749   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:34.365768   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:34.365779   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:34.365789   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:34.378842   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:34.378855   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:36.434298   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055455813s)
	I0601 11:55:38.936699   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:39.009065   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:39.038697   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.038710   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:39.038765   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:39.067921   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.067933   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:39.067992   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:39.098440   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.098452   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:39.098516   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:39.127326   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.127338   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:39.127408   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:39.156250   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.156261   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:39.156319   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:39.185946   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.185958   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:39.186014   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:39.215610   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.215622   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:39.215687   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:39.245933   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.245945   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:39.245952   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:39.245958   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:39.288218   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:39.288232   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:39.300049   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:39.300062   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:39.353082   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:39.353099   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:39.353107   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:39.368530   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:39.368544   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:41.423732   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055201327s)
	I0601 11:55:43.924128   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:44.009307   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:44.039683   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.039695   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:44.039751   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:44.067842   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.067855   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:44.067913   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:44.097345   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.097361   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:44.097434   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:44.127436   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.127448   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:44.127503   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:44.156091   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.156109   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:44.156164   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:44.185928   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.185961   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:44.186024   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:44.214767   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.214779   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:44.214838   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:44.245949   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.245962   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:44.245968   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:44.245975   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:44.287811   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:44.287825   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:44.300341   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:44.300374   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:44.358385   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:44.358412   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:44.358420   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:44.371801   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:44.371813   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:46.428143   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056342162s)
	I0601 11:55:48.928399   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:49.009439   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:49.041219   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.041231   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:49.041298   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:49.070249   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.070261   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:49.070314   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:49.099733   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.099745   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:49.099810   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:49.129069   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.129087   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:49.129156   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:49.160580   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.160592   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:49.160649   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:49.191907   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.191927   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:49.192017   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:49.224082   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.224094   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:49.224150   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:49.253092   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.253105   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:49.253112   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:49.253119   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:49.296708   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:49.296724   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:49.308993   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:49.309005   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:49.362195   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:49.362213   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:49.362221   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:49.375504   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:49.375515   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:51.430612   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055108694s)
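The pgrep probes above land roughly five seconds apart (11:55:28.9, :33.9, :38.9, :43.9, ...), so the harness is evidently polling for the apiserver process on a fixed cadence until some deadline. A sketch of such a wait loop, assuming a 5-second ticker; the 4-minute deadline is an illustrative guess, not taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.After(4 * time.Minute) // assumed overall timeout
	tick := time.NewTicker(5 * time.Second) // cadence seen in the log
	defer tick.Stop()

	for {
		select {
		case <-deadline:
			fmt.Println("timed out waiting for kube-apiserver process")
			return
		case <-tick.C:
			// Same probe the log shows: the newest process whose full
			// command line matches kube-apiserver.*minikube.*
			err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
		}
	}
}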
	I0601 11:55:53.931474   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:54.010786   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:54.041202   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.041214   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:54.041269   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:54.070844   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.070858   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:54.070913   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:54.100345   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.100358   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:54.100429   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:54.135095   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.135108   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:54.135161   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:54.164057   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.164070   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:54.164163   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:54.194214   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.194226   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:54.194283   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:54.224549   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.224563   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:54.224617   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:54.253713   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.253725   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:54.253732   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:54.253741   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:54.296231   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:54.296245   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:54.309155   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:54.309170   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:54.367180   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:54.367192   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:54.367202   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:54.380905   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:54.380918   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:56.441742   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060835682s)
	I0601 11:55:58.942261   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:59.010922   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:59.041572   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.041586   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:59.041646   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:59.071435   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.071447   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:59.071510   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:59.102114   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.102126   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:59.102180   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:59.131205   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.131218   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:59.131290   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:59.161117   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.161144   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:59.161199   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:59.192225   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.192237   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:59.192291   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:59.222459   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.222472   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:59.222526   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:59.252831   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.252844   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:59.252851   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:59.252859   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:01.309035   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056190082s)
	I0601 11:56:01.309146   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:01.309153   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:01.351333   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:01.351348   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:01.363658   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:01.363670   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:01.419248   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:01.419262   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:01.419269   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
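The journal and kernel-log gathering steps each cap their output at 400 lines (journalctl -n 400, dmesg piped through tail -n 400). A sketch that replays the three gathering commands verbatim from the log, assuming a local shell; runShell is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
)

// runShell mirrors the /bin/bash -c wrapping used throughout the log.
func runShell(script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("%q failed: %v\n", script, err)
		return
	}
	fmt.Print(string(out))
}

func main() {
	// The three journal/kernel sources gathered in each cycle above; the
	// -n 400 / tail -n 400 caps come straight from the log lines.
	runShell("sudo journalctl -u kubelet -n 400")
	runShell("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	runShell("sudo journalctl -u docker -n 400")
}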
	I0601 11:56:03.932268   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:04.010915   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:04.041445   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.041457   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:04.041511   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:04.071011   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.071024   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:04.071085   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:04.104002   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.104013   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:04.104077   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:04.134006   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.134019   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:04.134100   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:04.164966   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.164980   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:04.165051   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:04.195574   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.195585   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:04.195641   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:04.226690   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.226702   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:04.226761   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:04.255356   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.255369   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:04.255376   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:04.255397   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:04.299830   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:04.299845   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:04.311638   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:04.311650   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:04.366259   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:04.366299   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:04.366307   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:04.379569   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:04.379580   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:06.441255   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061688435s)
	I0601 11:56:08.942583   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:09.010888   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:09.041505   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.041516   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:09.041582   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:09.069955   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.069968   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:09.070020   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:09.100291   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.100302   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:09.100355   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:09.128780   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.128791   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:09.128844   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:09.158028   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.158040   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:09.158100   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:09.188003   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.188016   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:09.188071   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:09.217250   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.217263   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:09.217335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:09.247404   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.247416   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:09.247423   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:09.247430   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:09.291646   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:09.291660   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:09.303726   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:09.303737   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:09.359404   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:09.359416   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:09.359423   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:09.372338   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:09.372352   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:11.438025   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065685968s)
	I0601 11:56:13.938356   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:14.010482   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:14.042655   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.042666   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:14.042721   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:14.073307   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.073335   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:14.073392   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:14.103025   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.103036   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:14.103091   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:14.132511   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.132524   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:14.132583   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:14.162337   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.162349   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:14.162404   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:14.192882   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.192896   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:14.192952   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:14.222438   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.222451   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:14.222506   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:14.252850   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.252863   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:14.252871   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:14.252878   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:14.265274   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:14.265300   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:16.319655   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0543632s)
	I0601 11:56:16.319773   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:16.319781   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:16.360376   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:16.360390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:16.373260   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:16.373293   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:16.428799   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:18.930318   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:19.010706   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:19.041493   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.041505   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:19.041566   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:19.071367   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.071377   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:19.071438   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:19.102204   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.102217   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:19.102273   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:19.134887   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.134899   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:19.134960   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:19.165401   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.165414   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:19.165481   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:19.199809   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.199820   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:19.199917   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:19.231653   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.231665   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:19.231722   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:19.261391   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.261403   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:19.261410   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:19.261416   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:19.304944   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:19.304958   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:19.316813   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:19.316825   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:19.372616   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:19.372627   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:19.372633   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:19.385307   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:19.385318   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:21.446084   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060778195s)
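The "container status" step uses a shell fallback rather than assuming one runtime CLI: prefer crictl when `which` finds it on PATH, otherwise fall back to docker ps -a. The one-liner below is copied from the log lines above; wrapping it in Go for a runnable sketch is this example's only addition:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exact fallback one-liner from the "container status" steps above.
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		fmt.Printf("container status failed: %v\n", err)
	}
	fmt.Print(string(out))
}

The ~2s completion times logged for this step (versus the millisecond-scale docker ps calls) suggest the crictl attempt is paying the fallback cost before docker answers.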
	I0601 11:56:23.946972   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:24.009400   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:24.039656   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.039669   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:24.039728   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:24.070582   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.070594   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:24.070651   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:24.100855   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.100867   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:24.100920   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:24.131557   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.131567   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:24.131627   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:24.161584   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.161596   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:24.161652   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:24.191550   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.191562   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:24.191632   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:24.223779   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.223792   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:24.223849   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:24.254796   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.254809   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:24.254816   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:24.254823   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:24.299122   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:24.299137   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:24.311260   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:24.311276   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:24.366958   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:24.366989   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:24.366995   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:24.380157   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:24.380171   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:26.434527   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054347865s)
	I0601 11:56:28.934821   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:29.010609   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:29.042687   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.042700   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:29.042757   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:29.071650   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.071663   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:29.071720   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:29.100444   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.100456   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:29.100516   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:29.130300   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.130313   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:29.130370   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:29.160069   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.160081   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:29.160136   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:29.189354   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.189366   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:29.189420   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:29.218871   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.218883   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:29.218938   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:29.249986   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.249998   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:29.250005   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:29.250011   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:29.289956   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:29.289969   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:29.301893   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:29.301922   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:29.354235   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:29.354260   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:29.354288   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:29.367183   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:29.367196   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:31.425251   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058068657s)
	I0601 11:56:33.925564   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:34.008864   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:34.040390   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.040403   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:34.040457   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:34.070772   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.070785   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:34.070845   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:34.100100   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.100115   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:34.100189   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:34.131817   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.131832   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:34.131891   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:34.165170   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.165182   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:34.165240   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:34.196333   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.196346   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:34.196401   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:34.227456   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.227468   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:34.227522   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:34.255880   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.255896   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:34.255905   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:34.255911   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:36.313109   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057210284s)
	I0601 11:56:36.313220   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:36.313228   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:36.355277   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:36.355295   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:36.367936   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:36.367949   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:36.427265   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:36.427277   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:36.427284   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:38.944432   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:39.010467   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:39.042318   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.042330   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:39.042389   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:39.071800   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.071811   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:39.071865   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:39.102235   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.102247   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:39.102304   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:39.133642   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.133655   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:39.133711   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:39.162183   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.162215   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:39.162274   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:39.192299   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.192332   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:39.192402   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:39.224060   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.224073   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:39.224128   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:39.254137   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.254151   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:39.254157   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:39.254164   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:39.296037   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:39.296050   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:39.307439   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:39.307450   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:39.365141   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:39.365151   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:39.365165   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:39.378713   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:39.378727   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:41.442670   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063954081s)
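	
The cycle above shows the log gatherer probing for each expected control-plane container by name filter: every "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" call returns zero IDs, so each component is flagged as missing. Below is a minimal standalone sketch of that per-component probe. The helper name containerIDs and the hard-coded component list are illustrative assumptions; minikube's actual code runs these same docker commands over SSH inside the node rather than locally.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // components mirrors the names the log gatherer filters on above.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kubernetes-dashboard", "storage-provisioner",
        "kube-controller-manager",
    }

    // containerIDs runs the same docker filter seen in the log and
    // returns any matching container IDs (one per output line).
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("error checking %q: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // Matches the warnings in the log above.
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
	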
	I0601 11:56:43.943321   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:44.009335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:44.039950   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.039961   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:44.040015   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:44.069074   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.069087   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:44.069170   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:44.098171   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.098184   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:44.098242   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:44.127158   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.127170   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:44.127231   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:44.158530   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.158543   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:44.158600   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:44.187857   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.187869   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:44.187927   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:44.217215   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.217228   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:44.217282   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:44.251676   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.251689   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:44.251697   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:44.251703   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:44.296360   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:44.296377   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:44.308411   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:44.308422   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:44.363146   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:44.363158   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:44.363165   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:44.375992   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:44.376005   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:46.429887   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053894829s)
	I0601 11:56:48.930355   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:49.010017   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:49.040810   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.040823   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:49.040878   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:49.069024   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.069037   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:49.069090   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:49.100505   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.100519   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:49.100582   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:49.133348   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.133361   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:49.133416   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:49.162816   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.162828   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:49.162886   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:49.194148   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.194160   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:49.194216   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:49.223792   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.223804   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:49.223861   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:49.254312   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.254325   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:49.254332   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:49.254339   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:49.297715   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:49.297732   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:49.309499   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:49.309514   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:49.361498   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:49.361512   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:49.361519   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:49.374038   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:49.374050   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:51.428011   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053974706s)
	I0601 11:56:53.928463   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:54.010281   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:54.041861   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.041873   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:54.041925   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:54.070132   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.070144   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:54.070203   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:54.100461   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.100473   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:54.100529   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:54.129880   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.129891   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:54.129953   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:54.158973   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.158987   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:54.159041   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:54.189002   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.189013   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:54.189069   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:54.219965   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.219978   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:54.220032   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:54.250636   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.250647   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:54.250655   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:54.250664   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:54.294346   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:54.294360   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:54.306971   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:54.306984   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:54.362857   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:54.362870   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:54.362878   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:54.376322   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:54.376337   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:56.432087   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055762931s)
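	
Every "describe nodes" attempt above fails the same way: a refused connection to localhost:8443, the apiserver address recorded in /var/lib/minikube/kubeconfig. A quick way to confirm that symptom independently of kubectl is a plain TCP dial; this sketch assumes the probe runs on the node itself, and the two-second timeout is an arbitrary illustrative choice.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused dial here is consistent with the log: no
        // kube-apiserver container is running, so nothing listens
        // on the kubeconfig's localhost:8443 endpoint.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
	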
	I0601 11:56:58.933231   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:59.010295   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:59.041877   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.041889   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:59.041943   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:59.070763   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.070781   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:59.070837   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:59.100715   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.100727   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:59.100786   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:59.130622   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.130634   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:59.130689   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:59.161860   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.161873   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:59.161927   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:59.190790   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.190804   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:59.190859   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:59.219375   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.219387   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:59.219442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:59.249583   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.249596   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:59.249604   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:59.249611   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:59.291437   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:59.291452   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:59.303657   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:59.303668   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:59.357073   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:59.357084   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:59.357091   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:59.369377   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:59.369390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:01.425646   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056269873s)
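	
The container-status step above is a shell fallback: it prefers crictl when "which crictl" finds one on PATH, and otherwise falls back to "docker ps -a". The same preference order can be expressed directly in Go instead of a shell one-liner; listContainers is an assumed helper name, and passwordless sudo (as on the CI node) is assumed.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // listContainers mimics the fallback in the log: use crictl if it
    // is installed, otherwise fall back to the docker CLI.
    func listContainers() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", path, "ps", "-a").Output()
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := listContainers()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(string(out))
    }
	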
	I0601 11:57:03.925844   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:04.010201   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:04.041233   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.041245   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:04.041322   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:04.070072   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.070086   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:04.070153   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:04.100335   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.100354   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:04.100437   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:04.130281   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.130293   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:04.130352   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:04.167795   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.167807   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:04.167928   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:04.197871   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.197884   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:04.197940   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:04.228277   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.228288   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:04.228345   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:04.258092   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.258104   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:04.258111   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:04.258118   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:04.311843   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:04.311868   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:04.311874   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:04.324627   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:04.324640   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:06.380068   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055441139s)
	I0601 11:57:06.380181   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:06.380188   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:06.423000   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:06.423017   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:08.935789   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:09.008127   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:09.038879   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.038891   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:09.038947   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:09.068291   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.068306   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:09.068360   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:09.096958   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.096969   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:09.097039   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:09.126729   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.126741   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:09.126798   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:09.156004   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.156015   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:09.156095   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:09.184629   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.184642   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:09.184699   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:09.214073   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.214085   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:09.214146   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:09.243550   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.243562   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:09.243569   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:09.243576   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:09.286219   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:09.286233   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:09.298176   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:09.298188   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:09.352783   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:09.352796   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:09.352805   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:09.366089   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:09.366102   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:11.424220   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05813202s)
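	
Each pass also collects the last 400 journal lines for the kubelet and docker units ("journalctl -u <unit> -n 400"). A self-contained sketch of that collection step follows; unitLogs is an assumed helper name, and the code again presumes passwordless sudo as on the CI node.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitLogs fetches the last n lines for a systemd unit, matching
    // the journalctl invocations in the log above.
    func unitLogs(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl",
            "-u", unit, "-n", fmt.Sprint(n)).Output()
        return string(out), err
    }

    func main() {
        for _, u := range []string{"kubelet", "docker"} {
            logs, err := unitLogs(u, 400)
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", u, err)
                continue
            }
            fmt.Printf("== %s (%d bytes) ==\n", u, len(logs))
        }
    }
	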
	I0601 11:57:13.925524   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:14.010071   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:14.041352   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.041365   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:14.041423   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:14.071470   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.071482   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:14.071539   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:14.100965   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.100977   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:14.101111   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:14.129799   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.129810   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:14.129863   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:14.159841   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.159852   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:14.159908   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:14.190255   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.190270   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:14.190341   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:14.219539   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.219552   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:14.219607   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:14.247896   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.247930   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:14.247937   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:14.247945   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:14.291044   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:14.291058   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:14.304512   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:14.304523   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:14.356717   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:14.356731   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:14.356738   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:14.368729   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:14.368740   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:16.428777   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060048525s)
	I0601 11:57:18.929035   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:19.008006   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:19.040365   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.040380   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:19.040440   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:19.073546   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.073561   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:19.073626   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:19.108192   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.108212   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:19.108276   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:19.142430   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.142443   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:19.142538   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:19.175636   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.175650   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:19.175719   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:19.208195   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.208209   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:19.208267   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:19.240564   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.240576   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:19.240633   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:19.273419   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.273432   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:19.273439   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:19.273446   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:19.331449   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:19.331463   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:19.331471   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:19.346208   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:19.346222   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:21.407126   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060916068s)
	I0601 11:57:21.407235   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:21.407242   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:21.450235   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:21.450250   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:23.962515   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:24.007999   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:24.046910   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.046922   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:24.046977   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:24.078502   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.078515   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:24.078608   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:24.111688   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.111701   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:24.111764   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:24.143708   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.143721   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:24.143783   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:24.175299   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.175313   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:24.175387   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:24.210853   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.210866   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:24.210936   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:24.245012   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.245026   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:24.245095   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:24.281872   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.281885   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:24.281892   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:24.281899   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:24.299283   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:24.299300   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:26.356685   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057383504s)
	I0601 11:57:26.356862   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:26.356871   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:26.401842   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:26.401859   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:26.414869   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:26.414883   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:26.467468   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
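	
The timestamps show the whole gather-and-probe cycle repeating on a roughly five-second cadence, with each round opening on "sudo pgrep -xnf kube-apiserver.*minikube.*". A plain ticker loop with a deadline reproduces that cadence; the two-minute budget below is an assumption for illustration, not the test's actual wait timeout.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning reproduces the probe in the log: pgrep for a
    // kube-apiserver process whose command line mentions the profile.
    func apiserverRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run()
        return err == nil // pgrep exits 0 only when a process matched
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for tick := time.Tick(5 * time.Second); time.Now().Before(deadline); <-tick {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            fmt.Println("kube-apiserver not running yet, retrying")
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
	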
	I0601 11:57:28.967580   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:29.008160   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:29.040269   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.040281   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:29.040356   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:29.072206   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.072220   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:29.072281   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:29.105279   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.105291   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:29.105349   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:29.134791   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.134804   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:29.134860   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:29.164913   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.164925   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:29.164979   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:29.194121   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.194134   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:29.194190   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:29.224082   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.224094   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:29.224148   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:29.254968   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.255008   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:29.255015   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:29.255022   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:29.267556   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:29.267568   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:31.323029   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055474965s)
	I0601 11:57:31.323132   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:31.323140   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:31.365311   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:31.365325   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:31.377327   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:31.377341   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:31.435595   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:33.936600   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:34.008912   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:34.040625   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.040639   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:34.040694   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:34.072501   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.072513   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:34.072569   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:34.104579   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.104591   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:34.104653   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:34.135775   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.135787   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:34.135845   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:34.166312   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.166323   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:34.166381   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:34.195560   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.195572   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:34.195627   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:34.224692   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.224703   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:34.224765   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:34.255698   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.255710   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:34.255717   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:34.255727   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:34.300652   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:34.300667   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:34.313320   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:34.313334   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:34.368671   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:34.368683   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:34.368690   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:34.381336   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:34.381349   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:36.441359   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060024322s)
	I0601 11:57:38.943165   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:39.007618   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:39.046794   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.046808   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:39.046868   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:39.079598   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.079612   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:39.079683   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:39.109592   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.109604   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:39.109661   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:39.140083   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.140095   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:39.140151   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:39.170917   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.170929   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:39.170987   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:39.200633   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.200644   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:39.200698   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:39.232233   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.232274   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:39.232332   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:39.262769   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.262781   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:39.262788   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:39.262794   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:41.329410   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066626686s)
	I0601 11:57:41.329597   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:41.329608   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:41.383544   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:41.383564   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:41.408721   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:41.408743   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:41.509315   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:41.509346   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:41.509369   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:44.030515   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:44.507644   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:44.537454   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.537481   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:44.537554   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:44.568183   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.568197   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:44.568261   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:44.599536   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.599547   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:44.599606   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:44.630140   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.630154   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:44.630217   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:44.660777   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.660790   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:44.660846   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:44.691042   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.691055   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:44.691143   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:44.720629   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.720641   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:44.720699   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:44.750426   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.750438   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:44.750445   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:44.750452   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:44.765309   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:44.765324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:46.833468   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06815266s)
	I0601 11:57:46.833611   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:46.833623   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:46.894511   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:46.894539   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:46.907075   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:46.907090   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:46.971671   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:49.472151   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:49.507935   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:49.537886   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.537898   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:49.537960   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:49.568803   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.568816   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:49.568872   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:49.598891   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.598903   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:49.598962   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:49.628803   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.628815   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:49.628874   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:49.660107   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.660118   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:49.660209   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:49.691421   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.691437   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:49.691507   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:49.722844   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.722857   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:49.722911   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:49.755171   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.755183   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:49.755191   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:49.755211   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:49.768071   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:49.768082   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:51.830872   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06280221s)
	I0601 11:57:51.830991   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:51.830999   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:51.895350   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:51.895372   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:51.910561   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:51.910601   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:51.975211   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:57:54.475645   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:54.507404   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:54.546927   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.546940   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:54.547000   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:54.579713   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.579728   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:54.579797   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:54.614843   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.614860   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:54.614948   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:54.651551   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.651565   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:54.651624   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:54.687625   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.687640   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:54.687712   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:54.723794   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.723808   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:54.723872   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:54.759036   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.759050   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:54.759111   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:54.791361   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.791375   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:54.791382   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:54.791390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:54.839700   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:54.839716   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:54.854532   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:54.854547   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:54.915142   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 11:57:54.915157   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:54.915164   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:54.928393   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:54.928405   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:56.983268   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054875531s)
	I0601 11:57:59.485573   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:59.496486   28319 kubeadm.go:630] restartCluster took 4m4.930290056s
	W0601 11:57:59.496562   28319 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 11:57:59.496576   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:57:59.913633   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:59.923079   28319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:57:59.931076   28319 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:57:59.931127   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:57:59.939179   28319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:57:59.939204   28319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:58:00.683895   28319 out.go:204]   - Generating certificates and keys ...
	I0601 11:58:01.523528   28319 out.go:204]   - Booting up control plane ...
	W0601 11:59:56.436173   28319 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0601 11:59:56.436202   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:59:56.858184   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:59:56.868053   28319 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:59:56.868108   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:59:56.875717   28319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:59:56.875735   28319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:59:57.620797   28319 out.go:204]   - Generating certificates and keys ...
	I0601 11:59:58.294591   28319 out.go:204]   - Booting up control plane ...
	I0601 12:01:53.209412   28319 kubeadm.go:397] StartCluster complete in 7m58.682761983s
	I0601 12:01:53.209495   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 12:01:53.239013   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.239025   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 12:01:53.239081   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 12:01:53.268562   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.268573   28319 logs.go:276] No container was found matching "etcd"
	I0601 12:01:53.268647   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 12:01:53.300274   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.300286   28319 logs.go:276] No container was found matching "coredns"
	I0601 12:01:53.300359   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 12:01:53.329677   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.329689   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 12:01:53.329746   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 12:01:53.361469   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.361481   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 12:01:53.361536   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 12:01:53.391374   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.391386   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 12:01:53.391442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 12:01:53.419646   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.419659   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 12:01:53.419718   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 12:01:53.450297   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.450310   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 12:01:53.450317   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 12:01:53.450324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 12:01:53.493726   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 12:01:53.493744   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 12:01:53.506201   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 12:01:53.506214   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 12:01:53.559752   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0601 12:01:53.559763   28319 logs.go:123] Gathering logs for Docker ...
	I0601 12:01:53.559771   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 12:01:53.572451   28319 logs.go:123] Gathering logs for container status ...
	I0601 12:01:53.572466   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 12:01:55.624682   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052227376s)
	W0601 12:01:55.624796   28319 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 12:01:55.624810   28319 out.go:239] * 
	W0601 12:01:55.624940   28319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 12:01:55.624954   28319 out.go:239] * 
	W0601 12:01:55.625525   28319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 12:01:55.688737   28319 out.go:177] 
	W0601 12:01:55.731070   28319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 12:01:55.731219   28319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 12:01:55.731329   28319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 12:01:55.794921   28319 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
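The log's own suggestion (inspect the kubelet journal, then pass --extra-config=kubelet.cgroup-driver=systemd) maps directly onto this profile. A minimal retry sketch, assuming the cgroup-driver mismatch tracked in minikube issue 4172 is the root cause here; the profile name, driver, Kubernetes version, and flag are taken verbatim from the log above, and whether this resolves this particular run is unverified:

	# Check why the kubelet never became healthy inside the kic node container
	docker exec old-k8s-version-20220601114806-16804 journalctl -xeu kubelet | tail -n 50
	# Retry the start with the cgroup driver pinned to systemd, as the log suggests
	out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 \
		--driver=docker --kubernetes-version=v1.16.0 \
		--extra-config=kubelet.cgroup-driver=systemd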
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273",
	        "Created": "2022-06-01T18:48:12.461821519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:53:51.165763227Z",
	            "FinishedAt": "2022-06-01T18:53:48.32715559Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hosts",
	        "LogPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273-json.log",
	        "Name": "/old-k8s-version-20220601114806-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601114806-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601114806-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601114806-16804",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601114806-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601114806-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df15676c71a0eb8c1755841478abd978fa8d8f53d24ceed344774583d711d893",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59947"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59948"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59946"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df15676c71a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601114806-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff69f8f777d8",
	                        "old-k8s-version-20220601114806-16804"
	                    ],
	                    "NetworkID": "246cf6a028e4e11a14e92d87f31441d673c4de3a42936ed926f0c32bee110562",
	                    "EndpointID": "248cec2b4960c9be6d236f5305db55c60b48dd57301f892e0015a2ab70c18ccf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
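Individual fields of this inspect document can be extracted with Go templates instead of reading the full dump; the harness itself does exactly this later in the log. For example:

	# container state (the "State.Status" field above)
	docker container inspect old-k8s-version-20220601114806-16804 --format '{{.State.Status}}'
	# host port mapped to the container's SSH port (22/tcp)
	docker container inspect old-k8s-version-20220601114806-16804 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'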
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (507.083912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220601114806-16804 logs -n 25
E0601 12:01:59.892686   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220601114806-16804 logs -n 25: (3.54572601s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	| start   | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:47 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:47 PDT | 01 Jun 22 11:47 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:48 PDT | 01 Jun 22 11:48 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	| start   | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:57 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:59:59
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:59:59.653204   28829 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:59:59.653367   28829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:59:59.653373   28829 out.go:309] Setting ErrFile to fd 2...
	I0601 11:59:59.653377   28829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:59:59.653471   28829 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:59:59.653745   28829 out.go:303] Setting JSON to false
	I0601 11:59:59.668907   28829 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8969,"bootTime":1654101030,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:59:59.669021   28829 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:59:59.692330   28829 out.go:177] * [embed-certs-20220601115855-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:59:59.734931   28829 notify.go:193] Checking for updates...
	I0601 11:59:59.755632   28829 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:59:59.776895   28829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:59:59.797902   28829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:59:59.818891   28829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:59:59.840237   28829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:59:58.294591   28319 out.go:204]   - Booting up control plane ...
	I0601 11:59:59.862690   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:59:59.863349   28829 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:59:59.936186   28829 docker.go:137] docker version: linux-20.10.14
	I0601 11:59:59.936326   28829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:00:00.071723   28829 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:00:00.020706131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:00:00.115439   28829 out.go:177] * Using the docker driver based on existing profile
	I0601 12:00:00.136972   28829 start.go:284] selected driver: docker
	I0601 12:00:00.137021   28829 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedu
ledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:00.137102   28829 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:00:00.139236   28829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:00:00.273893   28829 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:00:00.221448867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:00:00.274092   28829 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 12:00:00.274108   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:00.274119   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:00.274130   28829 start_flags.go:306] config:
	{Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:00.317950   28829 out.go:177] * Starting control plane node embed-certs-20220601115855-16804 in cluster embed-certs-20220601115855-16804
	I0601 12:00:00.339702   28829 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:00:00.361619   28829 out.go:177] * Pulling base image ...
	I0601 12:00:00.403754   28829 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:00:00.403769   28829 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:00:00.403845   28829 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:00:00.403870   28829 cache.go:57] Caching tarball of preloaded images
	I0601 12:00:00.404060   28829 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:00:00.404081   28829 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 12:00:00.405150   28829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/config.json ...
	I0601 12:00:00.473577   28829 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:00:00.473612   28829 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:00:00.473622   28829 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:00:00.473678   28829 start.go:352] acquiring machines lock for embed-certs-20220601115855-16804: {Name:mk196f5f4a80c33b64e542dea375820ba3ed670b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:00:00.473769   28829 start.go:356] acquired machines lock for "embed-certs-20220601115855-16804" in 61.526µs
	I0601 12:00:00.473799   28829 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:00:00.473808   28829 fix.go:55] fixHost starting: 
	I0601 12:00:00.474098   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:00:00.546983   28829 fix.go:103] recreateIfNeeded on embed-certs-20220601115855-16804: state=Stopped err=<nil>
	W0601 12:00:00.547020   28829 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:00:00.590598   28829 out.go:177] * Restarting existing docker container for "embed-certs-20220601115855-16804" ...
	I0601 12:00:00.611803   28829 cli_runner.go:164] Run: docker start embed-certs-20220601115855-16804
	I0601 12:00:00.981301   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:00:01.057530   28829 kic.go:416] container "embed-certs-20220601115855-16804" state is running.
	I0601 12:00:01.058483   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:01.138894   28829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/config.json ...
	I0601 12:00:01.139319   28829 machine.go:88] provisioning docker machine ...
	I0601 12:00:01.139343   28829 ubuntu.go:169] provisioning hostname "embed-certs-20220601115855-16804"
	I0601 12:00:01.139423   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.220339   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:01.220539   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:01.220567   28829 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601115855-16804 && echo "embed-certs-20220601115855-16804" | sudo tee /etc/hostname
	I0601 12:00:01.352125   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601115855-16804
	
	I0601 12:00:01.352207   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.427439   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:01.427585   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:01.427600   28829 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601115855-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601115855-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601115855-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:00:01.544609   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: 
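The /etc/hosts snippet above makes the new hostname resolve locally: if a 127.0.1.1 entry already exists it is rewritten in place, otherwise one is appended. On a fresh node the net effect is a single line such as:

	127.0.1.1 embed-certs-20220601115855-16804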
	I0601 12:00:01.544628   28829 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:00:01.544653   28829 ubuntu.go:177] setting up certificates
	I0601 12:00:01.544660   28829 provision.go:83] configureAuth start
	I0601 12:00:01.544721   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:01.621530   28829 provision.go:138] copyHostCerts
	I0601 12:00:01.621625   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:00:01.621636   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:00:01.621742   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:00:01.621969   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:00:01.621980   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:00:01.622043   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:00:01.622216   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:00:01.622223   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:00:01.622288   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:00:01.622404   28829 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601115855-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601115855-16804]
	I0601 12:00:01.850945   28829 provision.go:172] copyRemoteCerts
	I0601 12:00:01.851024   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:00:01.851079   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.929859   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:02.016851   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 12:00:02.037368   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:00:02.055389   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0601 12:00:02.077593   28829 provision.go:86] duration metric: configureAuth took 532.923535ms
	I0601 12:00:02.077613   28829 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:00:02.077867   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:00:02.077925   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.152444   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.152592   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.152602   28829 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:00:02.272393   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:00:02.272406   28829 ubuntu.go:71] root file system type: overlay
	I0601 12:00:02.272550   28829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:00:02.272624   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.345039   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.345239   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.345322   28829 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:00:02.473536   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
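As the comments embedded in the generated unit explain, the empty ExecStart= directive clears the command inherited from the base dockerd unit before the replacement command is declared. A minimal sketch of the same reset-then-replace pattern as a systemd drop-in (the path and daemon flags here are assumptions for illustration, not what minikube writes):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    # first line resets the inherited command list; second line sets the new one
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker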
	I0601 12:00:02.473632   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.547006   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.547206   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.547219   28829 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:00:02.668285   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:00:02.668306   28829 machine.go:91] provisioned docker machine in 1.528998011s
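The diff-or-replace one-liner above keeps the unit update idempotent: if the freshly rendered docker.service.new is identical to the installed unit, diff exits 0 and the move/daemon-reload/restart branch never runs, so an unchanged daemon is not restarted. The same pattern in generic form (file and service names are placeholders):

    sudo diff -u /etc/example.conf /etc/example.conf.new \
      || { sudo mv /etc/example.conf.new /etc/example.conf; \
           sudo systemctl daemon-reload && sudo systemctl restart example.service; }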
	I0601 12:00:02.668317   28829 start.go:306] post-start starting for "embed-certs-20220601115855-16804" (driver="docker")
	I0601 12:00:02.668321   28829 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:00:02.668376   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:00:02.668419   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.744308   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:02.832162   28829 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:00:02.835671   28829 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:00:02.835684   28829 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:00:02.835691   28829 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:00:02.835696   28829 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:00:02.835704   28829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:00:02.835822   28829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:00:02.835969   28829 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:00:02.836134   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:00:02.843255   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:00:02.861502   28829 start.go:309] post-start completed in 193.177974ms
	I0601 12:00:02.861575   28829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:00:02.861682   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.936096   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.020138   28829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:00:03.024381   28829 fix.go:57] fixHost completed within 2.550601276s
	I0601 12:00:03.024393   28829 start.go:81] releasing machines lock for "embed-certs-20220601115855-16804", held for 2.550641205s
	I0601 12:00:03.024471   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:03.097794   28829 ssh_runner.go:195] Run: systemctl --version
	I0601 12:00:03.097795   28829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:00:03.097869   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:03.097902   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:03.176095   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.179173   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.393941   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:00:03.405857   28829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:00:03.415824   28829 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:00:03.415875   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:00:03.425026   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:00:03.437823   28829 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:00:03.518418   28829 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:00:03.586389   28829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:00:03.597266   28829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:00:03.669442   28829 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:00:03.679546   28829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:00:03.715983   28829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:00:03.793958   28829 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:00:03.794135   28829 cli_runner.go:164] Run: docker exec -t embed-certs-20220601115855-16804 dig +short host.docker.internal
	I0601 12:00:03.928920   28829 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:00:03.929017   28829 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:00:03.933477   28829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
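The hosts-file update filters out any stale host.minikube.internal line, appends the fresh mapping to a temp file, and copies the result back with sudo; a naive `sudo echo ... >> /etc/hosts` would fail because the redirection is performed by the unprivileged shell, not under sudo. A sketch of the same pattern with a placeholder entry:

    # drop the old line (if any), append the new tab-separated mapping, install as root
    { grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.1\texample.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts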
	I0601 12:00:03.943415   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:04.016419   28829 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:00:04.016501   28829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:00:04.048821   28829 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:00:04.048836   28829 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:00:04.048899   28829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:00:04.079435   28829 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:00:04.079457   28829 cache_images.go:84] Images are preloaded, skipping loading
	I0601 12:00:04.079567   28829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:00:04.154405   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:04.154416   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:04.154426   28829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 12:00:04.154437   28829 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601115855-16804 NodeName:embed-certs-20220601115855-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:00:04.154550   28829 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220601115855-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
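Per the inline comments, the zeroed conntrack values instruct kube-proxy to skip writing the corresponding net.netfilter sysctls, which it typically would not be permitted to do from inside the kic container anyway. On a host with the conntrack module loaded, one such value kube-proxy is leaving alone can be inspected directly:

    sysctl net.netfilter.nf_conntrack_max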
	I0601 12:00:04.154614   28829 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220601115855-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 12:00:04.154674   28829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:00:04.162496   28829 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:00:04.162605   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:00:04.169803   28829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0601 12:00:04.182475   28829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:00:04.196040   28829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0601 12:00:04.210349   28829 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:00:04.214249   28829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:00:04.224887   28829 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804 for IP: 192.168.58.2
	I0601 12:00:04.225006   28829 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:00:04.225070   28829 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:00:04.225156   28829 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/client.key
	I0601 12:00:04.225217   28829 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.key.cee25041
	I0601 12:00:04.225268   28829 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.key
	I0601 12:00:04.225483   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:00:04.225526   28829 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:00:04.225542   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:00:04.225573   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:00:04.225606   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:00:04.225635   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:00:04.225702   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:00:04.226272   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:00:04.245065   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:00:04.264844   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:00:04.283813   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 12:00:04.302400   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:00:04.320094   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:00:04.337340   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:00:04.355164   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:00:04.372566   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:00:04.390758   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:00:04.407937   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:00:04.425147   28829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:00:04.438402   28829 ssh_runner.go:195] Run: openssl version
	I0601 12:00:04.444064   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:00:04.452131   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.456181   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.456224   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.461511   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 12:00:04.468902   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:00:04.476746   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.480878   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.480926   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.486478   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:00:04.493830   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:00:04.501614   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.505599   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.505640   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.511112   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
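Each CA is installed twice: once as a readable PEM under /usr/share/ca-certificates, and once as a symlink in /etc/ssl/certs named after the certificate's subject hash (the output of `openssl x509 -hash -noout`, e.g. b5213941.0 above), because the hash-named file is what OpenSSL's default lookup uses to locate a trust anchor. The hash-to-symlink step written out directly (cert path taken from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"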
	I0601 12:00:04.518272   28829 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:04.518372   28829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:00:04.546843   28829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:00:04.554437   28829 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:00:04.554453   28829 kubeadm.go:626] restartCluster start
	I0601 12:00:04.554494   28829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:00:04.561477   28829 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:04.561586   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:04.636533   28829 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220601115855-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:00:04.636800   28829 kubeconfig.go:127] "embed-certs-20220601115855-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:00:04.637127   28829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:00:04.638462   28829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:00:04.646150   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:04.646199   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:04.654404   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:04.877249   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:04.877380   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:04.888485   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.077954   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.078102   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.090777   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.277957   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.278185   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.288604   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.476559   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.476656   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.488394   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.677991   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.678216   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.689348   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.876473   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.876581   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.887319   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.078192   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.078404   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.088967   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.275971   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.276084   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.286277   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.476564   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.476653   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.487710   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.677961   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.678149   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.688765   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.878002   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.878195   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.888550   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.075946   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.076132   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.087777   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.276117   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.276185   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.284689   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.477252   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.477434   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.488159   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.677742   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.677844   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.688190   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.688199   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.688241   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.696107   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.696118   28829 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 12:00:07.696125   28829 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:00:07.696181   28829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:00:07.726742   28829 docker.go:442] Stopping containers: [54f727789abd 1a421477b475 d34c5263066b 4b5d8c649cd9 54ff8c39a3a3 d7c01b3e7bd3 aff02a265852 26c16b34697b 61e2850c4dc2 5c57a813ff5a f842c60a2bc5 e84f942430d3 8fa7e200ea41 d699653d0b64 0338f069b9af 8ea64f1a925b]
	I0601 12:00:07.726812   28829 ssh_runner.go:195] Run: docker stop 54f727789abd 1a421477b475 d34c5263066b 4b5d8c649cd9 54ff8c39a3a3 d7c01b3e7bd3 aff02a265852 26c16b34697b 61e2850c4dc2 5c57a813ff5a f842c60a2bc5 e84f942430d3 8fa7e200ea41 d699653d0b64 0338f069b9af 8ea64f1a925b
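The filter name=k8s_.*_(kube-system)_ works because dockershim names every container it creates k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so anchoring the regex on the namespace field selects all kube-system containers in one pass. The equivalent listing by hand:

    # list kubelet-created kube-system containers (dockershim naming scheme)
    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}} {{.Names}}'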
	I0601 12:00:07.758600   28829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:00:07.769183   28829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:00:07.777276   28829 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 18:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  1 18:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun  1 18:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 18:59 /etc/kubernetes/scheduler.conf
	
	I0601 12:00:07.777325   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 12:00:07.785105   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 12:00:07.792774   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 12:00:07.800094   28829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.800141   28829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:00:07.806961   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 12:00:07.814145   28829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.814256   28829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 12:00:07.821393   28829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:00:07.829055   28829 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:00:07.829066   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:07.875534   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:08.943797   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068258535s)
	I0601 12:00:08.943827   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.070381   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.117719   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.164707   28829 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:00:09.164770   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:09.676929   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.174847   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.675184   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.725618   28829 api_server.go:71] duration metric: took 1.560936283s to wait for apiserver process to appear ...
	I0601 12:00:10.725639   28829 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:00:10.725650   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:13.229293   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0601 12:00:13.229314   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0601 12:00:13.731491   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:13.739444   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:00:13.739457   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:00:14.229657   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:14.235866   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:00:14.235887   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:00:14.729449   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:14.735550   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 200:
	ok
	I0601 12:00:14.742074   28829 api_server.go:140] control plane version: v1.23.6
	I0601 12:00:14.742087   28829 api_server.go:130] duration metric: took 4.016491291s to wait for apiserver health ...
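The probe sequence above is the normal shape of a control-plane restart: /healthz first returns 403 while anonymous access is still unauthorized (the RBAC bootstrap roles do not exist yet), then 500 while individual poststarthook checks are failing, and finally 200. The same endpoint can be queried by hand against the host-mapped port from the log; certificate verification is skipped here for brevity, and the request may be rejected as anonymous until RBAC bootstrap completes:

    curl -ks 'https://127.0.0.1:60746/healthz?verbose'   # prints each check as [+] ok / [-] failed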
	I0601 12:00:14.742094   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:14.742105   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:14.742117   28829 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:00:14.749795   28829 system_pods.go:59] 8 kube-system pods found
	I0601 12:00:14.749812   28829 system_pods.go:61] "coredns-64897985d-hxbhf" [b1b3b467-12fe-4681-9a86-2855ba1e087a] Running
	I0601 12:00:14.749819   28829 system_pods.go:61] "etcd-embed-certs-20220601115855-16804" [9bdd83e2-edc8-4fd6-913e-c978b2a390a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 12:00:14.749823   28829 system_pods.go:61] "kube-apiserver-embed-certs-20220601115855-16804" [f01aa1c0-7c66-485f-8ae9-ea81ec72d61f] Running
	I0601 12:00:14.749830   28829 system_pods.go:61] "kube-controller-manager-embed-certs-20220601115855-16804" [4b44afb1-a477-4b52-af8c-9fbf9947dcc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 12:00:14.749836   28829 system_pods.go:61] "kube-proxy-hhbwv" [19408c1b-0db7-4ce4-bda8-b9ef78054eb5] Running
	I0601 12:00:14.749840   28829 system_pods.go:61] "kube-scheduler-embed-certs-20220601115855-16804" [1e8cf785-92e1-4068-add7-d217ee3fd625] Running
	I0601 12:00:14.749845   28829 system_pods.go:61] "metrics-server-b955d9d8-cv5b4" [8e155e5b-8d5c-4898-a95f-4d24d1c85714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:00:14.749849   28829 system_pods.go:61] "storage-provisioner" [a3a21a47-4019-4f29-ac55-23ca85609de6] Running
	I0601 12:00:14.749853   28829 system_pods.go:74] duration metric: took 7.73298ms to wait for pod list to return data ...
	I0601 12:00:14.749859   28829 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:00:14.753342   28829 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:00:14.753360   28829 node_conditions.go:123] node cpu capacity is 6
	I0601 12:00:14.753372   28829 node_conditions.go:105] duration metric: took 3.509003ms to run NodePressure ...
	I0601 12:00:14.753387   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:14.902276   28829 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 12:00:14.908459   28829 kubeadm.go:777] kubelet initialised
	I0601 12:00:14.908471   28829 kubeadm.go:778] duration metric: took 6.181ms waiting for restarted kubelet to initialise ...
	I0601 12:00:14.908479   28829 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
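This extra wait iterates over the listed component selectors and blocks until each matching pod reports Ready, with a 4-minute budget apiece. Roughly the same check expressed with kubectl, shown for the CoreDNS selector only:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m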
	I0601 12:00:14.914477   28829 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-hxbhf" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:14.919226   28829 pod_ready.go:92] pod "coredns-64897985d-hxbhf" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:14.919234   28829 pod_ready.go:81] duration metric: took 4.746053ms waiting for pod "coredns-64897985d-hxbhf" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:14.919239   28829 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:16.930345   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:18.930602   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:20.931370   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:23.429560   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:25.431054   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:27.431111   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:29.432632   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:29.930254   28829 pod_ready.go:92] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.930266   28829 pod_ready.go:81] duration metric: took 15.011203247s waiting for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.930272   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.934493   28829 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.934501   28829 pod_ready.go:81] duration metric: took 4.223819ms waiting for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.934506   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.939831   28829 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.939839   28829 pod_ready.go:81] duration metric: took 5.322445ms waiting for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.939845   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hhbwv" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.944936   28829 pod_ready.go:92] pod "kube-proxy-hhbwv" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.944945   28829 pod_ready.go:81] duration metric: took 5.09599ms waiting for pod "kube-proxy-hhbwv" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.944951   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.950311   28829 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.950320   28829 pod_ready.go:81] duration metric: took 5.363535ms waiting for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.950326   28829 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:32.337276   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:34.338997   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:36.838194   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:39.339010   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:41.837043   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:43.839697   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:46.337938   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:48.338698   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:50.837208   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:53.336924   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:55.337759   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:57.837371   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:59.838487   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:02.338943   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:04.839121   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:07.336527   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:09.835809   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:11.837079   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:13.838677   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:16.336928   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:18.837052   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:20.838148   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:23.335490   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:25.336728   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:27.839348   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:30.337601   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:32.838908   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:35.337845   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:37.836046   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:39.836118   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:41.836308   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:43.838508   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:46.338445   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:48.838271   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:50.838560   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:53.335328   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:53.209412   28319 kubeadm.go:397] StartCluster complete in 7m58.682761983s
	I0601 12:01:53.209495   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 12:01:53.239013   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.239025   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 12:01:53.239081   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 12:01:53.268562   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.268573   28319 logs.go:276] No container was found matching "etcd"
	I0601 12:01:53.268647   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 12:01:53.300274   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.300286   28319 logs.go:276] No container was found matching "coredns"
	I0601 12:01:53.300359   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 12:01:53.329677   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.329689   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 12:01:53.329746   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 12:01:53.361469   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.361481   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 12:01:53.361536   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 12:01:53.391374   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.391386   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 12:01:53.391442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 12:01:53.419646   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.419659   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 12:01:53.419718   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 12:01:53.450297   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.450310   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 12:01:53.450317   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 12:01:53.450324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 12:01:53.493726   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 12:01:53.493744   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 12:01:53.506201   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 12:01:53.506214   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 12:01:53.559752   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 12:01:53.559763   28319 logs.go:123] Gathering logs for Docker ...
	I0601 12:01:53.559771   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 12:01:53.572451   28319 logs.go:123] Gathering logs for container status ...
	I0601 12:01:53.572466   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 12:01:55.624682   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052227376s)
	W0601 12:01:55.624796   28319 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 12:01:55.624810   28319 out.go:239] * 
	W0601 12:01:55.624940   28319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 12:01:55.624954   28319 out.go:239] * 
	W0601 12:01:55.625525   28319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 12:01:55.688737   28319 out.go:177] 
	W0601 12:01:55.731070   28319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0601 12:01:55.731219   28319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 12:01:55.731329   28319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 12:01:55.794921   28319 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:53:51 UTC, end at Wed 2022-06-01 19:01:57 UTC. --
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.457800407Z" level=info msg="Starting up"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459880544Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459918540Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459935542Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459943396Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461558394Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461592263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461607683Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461615678Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.467062010Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.471139789Z" level=info msg="Loading containers: start."
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.555493702Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.587145357Z" level=info msg="Loading containers: done."
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.597281456Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.597355151Z" level=info msg="Daemon has completed initialization"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 systemd[1]: Started Docker Application Container Engine.
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.622139295Z" level=info msg="API listen on [::]:2376"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.626019498Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-06-01T19:01:59Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:01:59 up  1:04,  0 users,  load average: 0.16, 0.59, 0.91
	Linux old-k8s-version-20220601114806-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:53:51 UTC, end at Wed 2022-06-01 19:02:00 UTC. --
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: I0601 19:01:58.954556   14315 server.go:410] Version: v1.16.0
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: I0601 19:01:58.954779   14315 plugins.go:100] No cloud provider specified.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: I0601 19:01:58.954791   14315 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: I0601 19:01:58.956331   14315 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: W0601 19:01:58.956973   14315 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: W0601 19:01:58.957041   14315 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 kubelet[14315]: F0601 19:01:58.957067   14315 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 19:01:58 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: I0601 19:01:59.709872   14338 server.go:410] Version: v1.16.0
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: I0601 19:01:59.710227   14338 plugins.go:100] No cloud provider specified.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: I0601 19:01:59.710281   14338 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: I0601 19:01:59.711841   14338 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: W0601 19:01:59.712709   14338 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: W0601 19:01:59.712810   14338 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 kubelet[14338]: F0601 19:01:59.712883   14338 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 19:01:59 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0601 12:01:59.720564   28952 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (465.790138ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220601114806-16804" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (490.78s)
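The kubelet journal above points at the likely root cause for this failure: kubelet v1.16.0 dies on every restart with "failed to run Kubelet: mountpoint for cpu not found", which is consistent with a kubelet that predates cgroup v2 support running on a node that exposes no per-controller cpu mount under /sys/fs/cgroup. A minimal triage sketch, assuming the kic container is named after the profile in this run and that the cgroup mismatch is in fact the cause; the second command is the workaround minikube itself suggests in the log:

	# "cgroup2fs" here would mean the node is cgroup v2 only, which a v1.16 kubelet cannot handle;
	# "tmpfs" would mean a v1 hierarchy is present and the cause lies elsewhere.
	docker exec old-k8s-version-20220601114806-16804 stat -fc %T /sys/fs/cgroup

	# Retry with an explicit kubelet cgroup driver, per the suggestion logged above.
	out/minikube-darwin-amd64 start -p old-k8s-version-20220601114806-16804 --extra-config=kubelet.cgroup-driver=systemd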

TestStartStop/group/no-preload/serial/Pause (43.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220601115057-16804 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
E0601 11:58:14.583778   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 11:58:21.819146   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804: exit status 2 (16.116585721s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
E0601 11:58:28.270174   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804: exit status 2 (16.112035917s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220601115057-16804 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-darwin-amd64 unpause -p no-preload-20220601115057-16804 --alsologtostderr -v=1: (1.007601336s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
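This Pause failure has a different shape: pause and unpause both succeed, but the post-pause status probe reports the apiserver as "Stopped" rather than the expected "Paused" (the assertion at start_stop_delete_test.go:313 above). A hand-run version of the same check, reusing the exact commands from this run's log:

	out/minikube-darwin-amd64 pause -p no-preload-20220601115057-16804 --alsologtostderr -v=1
	# The test expects "Paused" here; this run returned "Stopped" after a ~16s status call.
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
	out/minikube-darwin-amd64 unpause -p no-preload-20220601115057-16804 --alsologtostderr -v=1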
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601115057-16804
helpers_test.go:235: (dbg) docker inspect no-preload-20220601115057-16804:

-- stdout --
	[
	    {
	        "Id": "640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0",
	        "Created": "2022-06-01T18:50:59.851635845Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208013,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:52:13.832116689Z",
	            "FinishedAt": "2022-06-01T18:52:11.806175726Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/hostname",
	        "HostsPath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/hosts",
	        "LogPath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0-json.log",
	        "Name": "/no-preload-20220601115057-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220601115057-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220601115057-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/docker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da065f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220601115057-16804",
	                "Source": "/var/lib/docker/volumes/no-preload-20220601115057-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220601115057-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220601115057-16804",
	                "name.minikube.sigs.k8s.io": "no-preload-20220601115057-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c26eec6a274e171d2c8c60c4d4901a86316f39a02fd9e47b1f2bd527076308d3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59705"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59706"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59707"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59708"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59709"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c26eec6a274e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220601115057-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "640fdee1e797",
	                        "no-preload-20220601115057-16804"
	                    ],
	                    "NetworkID": "3a8d7e898b67819d09e7c626e20c10b519689f708220d091d47f03ea6749e9b3",
	                    "EndpointID": "acdb61ec9982dee0525dc6aefaae2ab513e16af32e41f91ff64319535a82f438",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
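For reference, the "Ports" block in the inspect output above is how the test harness reaches the node container: each container port (22/tcp for SSH, 8443/tcp for the apiserver, and so on) is published on 127.0.0.1 under an ephemeral host port. As a minimal sketch (assuming the container from this run still exists), the same Go template the harness uses later in these logs pulls the SSH host port straight out of the inspect data:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-20220601115057-16804
	# prints 59705 for the inspect output captured above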
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220601115057-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220601115057-16804 logs -n 25: (2.748960392s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p auto-20220601113004-16804                      | auto-20220601113004-16804               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:40 PDT | 01 Jun 22 11:45 PDT |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p auto-20220601113004-16804                      | auto-20220601113004-16804               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:45 PDT | 01 Jun 22 11:45 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p auto-20220601113004-16804                      | auto-20220601113004-16804               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:45 PDT | 01 Jun 22 11:45 PDT |
	| start   | -p false-20220601113005-16804                     | false-20220601113005-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:44 PDT | 01 Jun 22 11:46 PDT |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p false-20220601113005-16804                     | false-20220601113005-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| start   | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:45 PDT | 01 Jun 22 11:46 PDT |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p false-20220601113005-16804                     | false-20220601113005-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	| delete  | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	| start   | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:47 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:47 PDT | 01 Jun 22 11:47 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:48 PDT | 01 Jun 22 11:48 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	| start   | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:57 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
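	Since the failing step in this group is Pause, the relevant tail of the audit trail is the pause/unpause pair recorded as the last two entries above; as a sketch (using the CI-built binary path from this run), that step can be replayed against the profile with:
	
	out/minikube-darwin-amd64 pause -p no-preload-20220601115057-16804 --alsologtostderr -v=1
	out/minikube-darwin-amd64 unpause -p no-preload-20220601115057-16804 --alsologtostderr -v=1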
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:53:49
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:53:49.869744   28319 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:53:49.870058   28319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:53:49.870063   28319 out.go:309] Setting ErrFile to fd 2...
	I0601 11:53:49.870067   28319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:53:49.870200   28319 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:53:49.870479   28319 out.go:303] Setting JSON to false
	I0601 11:53:49.885748   28319 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8599,"bootTime":1654101030,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:53:49.885855   28319 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:53:49.907511   28319 out.go:177] * [old-k8s-version-20220601114806-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:53:49.929263   28319 notify.go:193] Checking for updates...
	I0601 11:53:49.950161   28319 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:53:49.972303   28319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:53:49.993555   28319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:53:50.019203   28319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:53:50.040605   28319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:53:50.063270   28319 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:53:50.085267   28319 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 11:53:50.106145   28319 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:53:50.179855   28319 docker.go:137] docker version: linux-20.10.14
	I0601 11:53:50.179965   28319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:53:50.309210   28319 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:53:50.252610494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:53:50.352718   28319 out.go:177] * Using the docker driver based on existing profile
	I0601 11:53:50.373802   28319 start.go:284] selected driver: docker
	I0601 11:53:50.373850   28319 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:50.374023   28319 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:53:50.377412   28319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:53:50.505237   28319 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:53:50.450324655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:53:50.505421   28319 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:53:50.505439   28319 cni.go:95] Creating CNI manager for ""
	I0601 11:53:50.505447   28319 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:53:50.505454   28319 start_flags.go:306] config:
	{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:50.527438   28319 out.go:177] * Starting control plane node old-k8s-version-20220601114806-16804 in cluster old-k8s-version-20220601114806-16804
	I0601 11:53:50.548947   28319 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:53:50.570241   28319 out.go:177] * Pulling base image ...
	I0601 11:53:50.613151   28319 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:53:50.613177   28319 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:53:50.613243   28319 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:53:50.613268   28319 cache.go:57] Caching tarball of preloaded images
	I0601 11:53:50.613461   28319 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:53:50.613486   28319 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:53:50.614580   28319 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:53:50.680684   28319 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:53:50.680699   28319 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:53:50.680708   28319 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:53:50.680756   28319 start.go:352] acquiring machines lock for old-k8s-version-20220601114806-16804: {Name:mke97f71f3781c3324662a5c4576dc1a6ff166e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:53:50.680837   28319 start.go:356] acquired machines lock for "old-k8s-version-20220601114806-16804" in 61.411µs
	I0601 11:53:50.680855   28319 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:53:50.680865   28319 fix.go:55] fixHost starting: 
	I0601 11:53:50.681120   28319 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:53:50.749601   28319 fix.go:103] recreateIfNeeded on old-k8s-version-20220601114806-16804: state=Stopped err=<nil>
	W0601 11:53:50.749634   28319 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:53:50.771624   28319 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601114806-16804" ...
	I0601 11:53:47.937151   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:50.415910   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:50.793636   28319 cli_runner.go:164] Run: docker start old-k8s-version-20220601114806-16804
	I0601 11:53:51.159654   28319 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:53:51.244535   28319 kic.go:416] container "old-k8s-version-20220601114806-16804" state is running.
	I0601 11:53:51.245201   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:51.377956   28319 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:53:51.378362   28319 machine.go:88] provisioning docker machine ...
	I0601 11:53:51.378386   28319 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601114806-16804"
	I0601 11:53:51.378453   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.457140   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:51.457343   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:51.457358   28319 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601114806-16804 && echo "old-k8s-version-20220601114806-16804" | sudo tee /etc/hostname
	I0601 11:53:51.580646   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601114806-16804
	
	I0601 11:53:51.580749   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.656628   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:51.656782   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:51.656796   28319 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601114806-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601114806-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601114806-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:53:51.776288   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:53:51.776311   28319 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:53:51.776328   28319 ubuntu.go:177] setting up certificates
	I0601 11:53:51.776340   28319 provision.go:83] configureAuth start
	I0601 11:53:51.776419   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:51.850151   28319 provision.go:138] copyHostCerts
	I0601 11:53:51.850269   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:53:51.850278   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:53:51.850366   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:53:51.850623   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:53:51.850633   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:53:51.850695   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:53:51.850828   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:53:51.850834   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:53:51.850894   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:53:51.851013   28319 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601114806-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601114806-16804]
	I0601 11:53:51.901708   28319 provision.go:172] copyRemoteCerts
	I0601 11:53:51.901767   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:53:51.901818   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.975877   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:52.060009   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:53:52.077110   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:53:52.093871   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0601 11:53:52.110974   28319 provision.go:86] duration metric: configureAuth took 334.623818ms
	I0601 11:53:52.110987   28319 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:53:52.111171   28319 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:53:52.111232   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.184299   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.184438   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.184448   28319 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:53:52.302847   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:53:52.302863   28319 ubuntu.go:71] root file system type: overlay
	I0601 11:53:52.303018   28319 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:53:52.303102   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.376389   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.376552   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.376603   28319 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:53:52.502277   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:53:52.502373   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.575586   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.575726   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.575739   28319 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:53:52.696095   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
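	The one-liner above is what keeps the unit install idempotent: diff -u exits 0 when the staged file matches the live one, so the || branch (move into place, daemon-reload, enable, restart) only runs when the unit actually changed. Restated as a two-line sketch of the same pattern run above:
	
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl daemon-reload; sudo systemctl restart docker; }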
	I0601 11:53:52.696111   28319 machine.go:91] provisioned docker machine in 1.317750791s
	I0601 11:53:52.696121   28319 start.go:306] post-start starting for "old-k8s-version-20220601114806-16804" (driver="docker")
	I0601 11:53:52.696125   28319 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:53:52.696189   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:53:52.696241   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.769932   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:52.855461   28319 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:53:52.859028   28319 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:53:52.859043   28319 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:53:52.859052   28319 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:53:52.859056   28319 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:53:52.859064   28319 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:53:52.859169   28319 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:53:52.859314   28319 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:53:52.859492   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:53:52.866875   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:53:52.884313   28319 start.go:309] post-start completed in 188.184945ms
	I0601 11:53:52.884426   28319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:53:52.884507   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.959492   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.043087   28319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:53:53.047543   28319 fix.go:57] fixHost completed within 2.366693794s
	I0601 11:53:53.047555   28319 start.go:81] releasing machines lock for "old-k8s-version-20220601114806-16804", held for 2.366727273s
	I0601 11:53:53.047629   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:53.121099   28319 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:53:53.121221   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:53.121364   28319 ssh_runner.go:195] Run: systemctl --version
	I0601 11:53:53.121966   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:53.202586   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.205983   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.287975   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:53:53.422168   28319 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:53:53.432821   28319 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:53:53.432877   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:53:53.443234   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:53:53.456386   28319 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:53:53.525203   28319 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:53:53.595305   28319 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:53:53.605613   28319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:53:53.677054   28319 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:53:53.687222   28319 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:53:53.721998   28319 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:53:53.799095   28319 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 11:53:53.799216   28319 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601114806-16804 dig +short host.docker.internal
	I0601 11:53:53.940925   28319 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:53:53.941045   28319 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:53:53.945523   28319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
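	The hosts rewrite above follows a build-then-copy pattern: the filtered file plus the new entry is assembled under /tmp (the $$ suffix is the shell PID, so concurrent runs get distinct temp files) and then copied into place with sudo, because a plain redirect into /etc/hosts would be performed by the unprivileged shell and fail. Reformatted for readability, the same command is:
	
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts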
	I0601 11:53:53.955094   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:54.028140   28319 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:53:54.028206   28319 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:53:54.058427   28319 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:53:54.058444   28319 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:53:54.058545   28319 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:53:54.088697   28319 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:53:54.088719   28319 cache_images.go:84] Images are preloaded, skipping loading
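The two identical `docker images` listings back this decision: the expected v1.16.0 image set is already in the node's Docker daemon, so the preload tarball is not extracted again. A rough manual check along the same lines (image names taken from the listing above, run inside the node):

    # Confirm the preloaded control-plane images are present in the daemon.
    for img in k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/etcd:3.3.15-0 \
               k8s.gcr.io/coredns:1.6.2 k8s.gcr.io/pause:3.1; do
      docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx "$img" \
        && echo "present: $img" || echo "missing: $img"
    done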
	I0601 11:53:54.088807   28319 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:53:54.166463   28319 cni.go:95] Creating CNI manager for ""
	I0601 11:53:54.166476   28319 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:53:54.166488   28319 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:53:54.166502   28319 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601114806-16804 NodeName:old-k8s-version-20220601114806-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:53:54.166740   28319 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601114806-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601114806-16804
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 11:53:54.166870   28319 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601114806-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
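The drop-in above uses the standard systemd override idiom: the bare `ExecStart=` clears the command inherited from the base kubelet unit, and the second `ExecStart=` installs minikube's flags. A hedged sketch of writing such an override by hand (paths from the log; kubelet flags abbreviated, and the `[Install]` body is truncated in the log so it is omitted here):

    # Install a kubelet ExecStart override and reload systemd.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime=docker' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload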
	I0601 11:53:54.166970   28319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 11:53:54.175057   28319 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:53:54.175168   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:53:54.182581   28319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0601 11:53:54.195344   28319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:53:54.209271   28319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
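The 2148-byte file just copied up is the config that the restart path below feeds to kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd, at 11:53:57-11:53:59). The same sequence as a loop, mirroring the log's commands:

    # Re-run the kubeadm init phases against the generated config,
    # using the version-pinned binaries as the log does.
    for phase in 'certs all' 'kubeconfig all' kubelet-start 'control-plane all' 'etcd local'; do
      sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase unquoted on purpose
    done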
	I0601 11:53:54.222455   28319 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:53:54.226242   28319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:53:54.235793   28319 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804 for IP: 192.168.49.2
	I0601 11:53:54.236026   28319 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:53:54.236076   28319 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:53:54.236166   28319 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.key
	I0601 11:53:54.236237   28319 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2
	I0601 11:53:54.236290   28319 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key
	I0601 11:53:54.236516   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:53:54.236567   28319 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:53:54.236582   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:53:54.236627   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:53:54.236663   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:53:54.236693   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:53:54.236758   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:53:54.237319   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:53:54.255312   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:53:54.273877   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:53:54.292370   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:53:54.309832   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:53:54.326977   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:53:54.344196   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:53:54.362336   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:53:54.379964   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:53:54.397530   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:53:54.417711   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:53:54.437491   28319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:53:54.450542   28319 ssh_runner.go:195] Run: openssl version
	I0601 11:53:54.456042   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:53:54.464269   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.468369   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.468417   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.473721   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:53:54.481064   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:53:54.489014   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.493352   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.493405   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.498751   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:53:54.506172   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:53:54.514267   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.518553   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.518598   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.523963   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
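The openssl/ln pairs in this block do by hand what `c_rehash` automates: OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filename, so each installed PEM needs a `<hash>.0` symlink. One certificate's worth of the same step:

    # Link a CA cert under its OpenSSL subject hash so TLS lookups find it.
    PEM=/usr/share/ca-certificates/minikubeCA.pem    # path from the log
    HASH=$(openssl x509 -hash -noout -in "$PEM")     # prints e.g. b5213941
    sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"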
	I0601 11:53:54.531759   28319 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:54.531914   28319 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:53:54.560485   28319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:53:54.568453   28319 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:53:54.568470   28319 kubeadm.go:626] restartCluster start
	I0601 11:53:54.568526   28319 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:53:54.576181   28319 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:54.576234   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:54.648876   28319 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:53:54.649065   28319 kubeconfig.go:127] "old-k8s-version-20220601114806-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:53:54.649419   28319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:53:54.650792   28319 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
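The diff above is the drift check behind restartCluster: if the freshly rendered kubeadm.yaml.new differs from the live copy, minikube reconfigures rather than merely restarting (the `cp` at 11:53:57 below is the drifted branch). A sketch of that decision:

    # Choose between in-place restart and full reconfigure based on config drift.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "config unchanged: restart components in place"
    else
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
      echo "config drifted: re-run kubeadm phases"
    fi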
	I0601 11:53:54.658693   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:54.658754   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:54.667668   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:54.867864   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:54.868016   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:52.915439   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:54.916282   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:57.416935   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	W0601 11:53:54.878565   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.067861   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.068061   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.078749   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.267872   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.267970   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.277798   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.467808   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.468001   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.478316   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.668830   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.668990   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.679581   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.867820   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.867886   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.877012   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.067800   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.067905   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.078888   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.268000   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.268155   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.280256   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.469870   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.470054   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.480670   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.668044   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.668248   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.678758   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.869784   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.870011   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.881309   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.068003   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.068108   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.078632   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.268862   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.269009   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.279785   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.467744   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.467859   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.476668   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.669778   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.669940   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.680383   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.680392   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.680428   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.688734   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.688748   28319 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
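The wall of pgrep lines above is a fixed-interval poll (about 200 ms during this pre-check, 500 ms later while waiting for the process to appear) that gives up at a deadline, yielding the "timed out waiting for the condition" error. A hedged bash sketch of the same wait; the 60 s deadline is an assumption, not a value from the log:

    # Poll for the apiserver process every 500 ms, giving up after 60 s (assumed).
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'timed out waiting for kube-apiserver' >&2; exit 1; }
      sleep 0.5
    done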
	I0601 11:53:57.688756   28319 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:53:57.688806   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:53:57.716946   28319 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:53:57.727312   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:53:57.734908   28319 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 18:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jun  1 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jun  1 18:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jun  1 18:50 /etc/kubernetes/scheduler.conf
	
	I0601 11:53:57.734963   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:53:57.742318   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:53:57.749223   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:53:57.756324   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:53:57.763812   28319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:53:57.771443   28319 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:53:57.771471   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:57.824342   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.674608   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.883641   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.947348   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:59.001013   28319 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:53:59.001108   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:59.510767   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:59.916784   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:01.917518   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:00.009647   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:00.509747   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:01.010150   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:01.509684   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:02.010421   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:02.509629   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:03.010849   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:03.509597   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:04.010617   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:04.509864   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:04.417417   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:06.915839   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:05.009626   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:05.510122   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:06.011243   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:06.509597   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:07.010075   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:07.510735   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:08.009752   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:08.510521   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:09.011821   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:09.509668   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:09.416812   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:11.916325   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:10.009948   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:10.510847   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:11.009616   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:11.511800   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:12.011078   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:12.509781   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:13.010426   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:13.511504   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:14.009773   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:14.511892   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:13.917075   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:16.417004   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:15.009733   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:15.509887   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:16.009785   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:16.509980   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:17.010719   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:17.510131   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:18.010694   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:18.509925   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:19.009913   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:19.509819   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:18.418120   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:20.915536   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:20.010244   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:20.511718   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:21.009981   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:21.511674   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:22.010072   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:22.510782   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:23.010358   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:23.510119   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:24.010784   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:24.510053   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:22.915622   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:25.416214   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:25.010176   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:25.509875   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:26.010334   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:26.509928   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:27.011901   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:27.510111   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:28.010803   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:28.511923   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:29.009812   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:29.510817   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:27.917006   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:29.917023   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:32.412894   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:30.009917   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:30.509902   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:31.009955   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:31.510015   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:32.009897   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:32.511855   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:33.010119   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:33.509814   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:34.009927   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:34.510142   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:34.413724   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:36.916224   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:35.009839   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:35.510508   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:36.011637   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:36.510196   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:37.011880   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:37.510089   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:38.009692   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:38.511810   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:39.011487   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:39.510121   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:39.413982   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:41.915154   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:40.009747   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:40.510936   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:41.009982   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:41.511810   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:42.009813   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:42.509738   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:43.009671   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:43.510070   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:44.010000   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:44.510019   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:43.915607   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:46.415307   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:45.011452   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:45.510016   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:46.011805   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:46.511096   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:47.010260   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:47.511556   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:48.011623   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:48.510043   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:49.010213   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:49.511714   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:48.918361   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:51.415273   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:50.010714   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:50.510086   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:51.010435   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:51.509903   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:52.011713   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:52.511717   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:53.010672   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:53.510554   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:54.011736   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:54.510455   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:53.415382   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:55.915589   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:55.009677   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:55.511743   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:56.010494   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:56.510375   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:57.009595   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:57.510546   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:58.009763   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:58.510692   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:59.010031   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:54:59.041359   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.041374   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:54:59.041433   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:54:59.070260   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.070272   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:54:59.070335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:54:59.100026   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.100038   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:54:59.100092   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:54:59.130410   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.130422   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:54:59.130489   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:54:59.161102   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.161116   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:54:59.161174   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:54:59.190924   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.190935   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:54:59.190999   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:54:59.220657   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.220668   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:54:59.220727   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:54:59.249159   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.249172   28319 logs.go:276] No container was found matching "kube-controller-manager"
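The eight `docker ps -a` calls above scan for each control-plane piece via the kubelet's `k8s_<container>_<pod>_<namespace>_...` container-naming convention; all come back empty, hence the warnings. The same sweep as one loop:

    # Look for each expected kube-system container by its k8s_ name prefix.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      ids=$(docker ps -a --filter "name=k8s_$c" --format '{{.ID}}')
      echo "$c: ${ids:-none}"
    done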
	I0601 11:54:59.249178   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:54:59.249185   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:54:59.261384   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:54:59.261396   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:54:59.314775   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:54:59.314790   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:54:59.314813   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:54:59.327098   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:54:59.327111   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
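With no containers to inspect, the fallback diagnostics pull the kubelet and Docker journals, kernel warnings, `kubectl describe nodes` (which fails below while the apiserver is down), and a container listing. The same commands bundled for reuse, as they appear in the log:

    # Gather the diagnostics minikube collects here (run on the node).
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig || true
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a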
	I0601 11:54:57.916340   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:00.413721   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:01.380143   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053042018s)
	I0601 11:55:01.380273   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:01.380280   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:03.922143   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:04.010905   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:04.040989   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.041000   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:04.041053   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:04.068936   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.068948   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:04.069005   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:04.097959   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.097971   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:04.098033   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:04.126721   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.126734   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:04.126798   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:04.159225   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.159236   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:04.159294   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:04.190775   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.190816   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:04.190876   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:04.221251   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.221264   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:04.221323   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:04.252908   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.252955   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:04.252962   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:04.252973   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:04.295721   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:04.295735   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:04.307860   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:04.307873   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:04.362481   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:04.362494   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:04.362502   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:04.374612   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:04.374623   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:02.916832   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:05.414833   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:07.416272   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:06.432720   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058108483s)
	I0601 11:55:08.935099   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:09.011533   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:09.042307   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.042320   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:09.042373   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:09.071674   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.071686   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:09.071752   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:09.100500   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.100516   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:09.100572   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:09.129557   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.129568   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:09.129632   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:09.159131   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.159144   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:09.159198   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:09.188211   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.188224   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:09.188282   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:09.218887   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.218900   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:09.218955   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:09.248189   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.248204   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:09.248212   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:09.248220   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:09.292398   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:09.292412   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:09.305043   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:09.305056   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:09.358584   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:09.358623   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:09.358646   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:09.371613   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:09.371625   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:09.914468   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:11.915325   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:11.427594   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055980689s)
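Each retry cycle above runs the same per-component probe: "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" for every expected control-plane piece, and every probe comes back with zero containers. The probe can be reproduced by hand with a loop like the following sketch, which assumes a shell on the minikube node (for example via "minikube ssh"); the component names are taken from the log above:

    #!/bin/bash
    # Probe for the kubeadm-style control-plane containers the log is looking for.
    # An empty result mirrors the "0 containers" / "No container was found" lines above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "no container found matching ${c}"
      else
        echo "${c}: ${ids}"
      fi
    done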
	I0601 11:55:13.928572   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:14.011456   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:14.041396   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.041409   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:14.041466   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:14.069221   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.069233   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:14.069300   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:14.098018   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.098031   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:14.098087   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:14.128468   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.128480   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:14.128538   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:14.162047   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.162059   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:14.162114   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:14.195633   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.195647   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:14.195716   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:14.224730   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.224743   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:14.224796   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:14.255413   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.255426   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:14.255449   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:14.255456   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:14.297925   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:14.297938   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:14.311464   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:14.311477   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:14.363749   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:14.363759   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:14.363766   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:14.377049   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:14.377063   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:13.916812   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:16.413640   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:16.431836   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054784141s)
	I0601 11:55:18.932093   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:19.009576   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:19.039961   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.039974   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:19.040032   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:19.069166   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.069178   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:19.069234   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:19.097392   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.097405   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:19.097468   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:19.128648   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.128660   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:19.128716   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:19.158222   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.158235   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:19.158294   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:19.188141   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.188155   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:19.188209   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:19.219575   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.219588   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:19.219654   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:19.253005   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.253019   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:19.253026   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:19.253035   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:19.266133   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:19.266149   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:18.916745   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:21.413196   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:21.320131   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053993397s)
	I0601 11:55:21.320234   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:21.320240   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:21.361727   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:21.361740   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:21.375163   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:21.375177   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:21.432802   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
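The "describe nodes" step fails identically on every attempt: the connection to localhost:8443 is refused, which is consistent with the kube-apiserver container never having been created. A quick hedged check of the port state from inside the node follows; the availability of curl and ss in the minikube image is an assumption here, not something the log confirms:

    # Is anything answering on the apiserver port? "Connection refused" here
    # matches the kubectl error above.
    curl -k https://localhost:8443/healthz
    # Alternatively, ask the kernel whether the port has a listener at all.
    sudo ss -ltn 'sport = :8443'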
	I0601 11:55:23.934258   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:24.009921   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:24.040408   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.040420   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:24.040476   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:24.068603   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.068615   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:24.068673   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:24.097572   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.097584   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:24.097641   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:24.127008   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.127020   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:24.127083   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:24.157041   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.157054   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:24.157117   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:24.186748   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.186761   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:24.186819   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:24.215933   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.215946   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:24.216013   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:24.247816   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.247829   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:24.247836   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:24.247843   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:24.260281   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:24.260293   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:23.414008   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:25.913520   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:26.315423   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055142929s)
	I0601 11:55:26.315530   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:26.315537   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:26.354821   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:26.354835   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:26.369903   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:26.369926   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:26.426327   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:28.926931   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:29.009389   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:29.040058   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.040071   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:29.040129   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:29.068341   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.068353   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:29.068410   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:29.098806   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.098817   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:29.098876   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:29.128428   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.128462   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:29.128520   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:29.158686   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.158725   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:29.158785   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:29.188284   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.188295   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:29.188348   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:29.217778   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.217791   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:29.217855   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:29.247459   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.247472   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:29.247479   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:29.247485   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:29.290765   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:29.290780   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:29.302626   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:29.302638   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:29.356128   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:29.356140   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:29.356147   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:29.369506   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:29.369522   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:27.915099   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:30.413362   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:31.427130   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057620396s)
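The container-status gather that just completed uses a fallback chain: prefer crictl when it is on the PATH, otherwise (or if crictl itself fails) fall back to plain docker. A roughly equivalent expansion of the one-liner from the log, with the branching made explicit:

    # Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      # crictl is present but may still fail, e.g. with no CRI endpoint configured
      sudo "$(command -v crictl)" ps -a || sudo docker ps -a
    else
      # no crictl at all: the bare name fails under sudo and docker takes over
      sudo docker ps -a
    fi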
	I0601 11:55:33.928625   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:34.009592   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:34.039227   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.039241   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:34.039301   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:34.068316   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.068329   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:34.068388   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:34.097349   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.097360   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:34.097414   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:34.127402   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.127415   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:34.127473   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:34.158010   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.158023   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:34.158091   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:34.189587   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.189604   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:34.189668   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:34.219589   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.219601   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:34.219659   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:34.251097   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.251111   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:34.251118   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:34.251125   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:34.294366   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:34.294381   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:34.306716   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:34.306749   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:34.365768   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:34.365779   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:34.365789   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:34.378842   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:34.378855   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:32.914832   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:35.414524   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:37.415451   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
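Interleaved with the retry loop, a second test process (pid 28155) is polling a metrics-server pod whose Ready condition stays False for the whole window shown. The same condition can be queried directly with kubectl; this is a sketch, and the pod name is copied from the log and would differ on another run:

    # Prints the pod's Ready condition; "False" matches the pod_ready lines above.
    kubectl -n kube-system get pod metrics-server-b955d9d8-jr5fk \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'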
	I0601 11:55:36.434298   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055455813s)
	I0601 11:55:38.936699   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:39.009065   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:39.038697   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.038710   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:39.038765   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:39.067921   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.067933   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:39.067992   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:39.098440   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.098452   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:39.098516   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:39.127326   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.127338   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:39.127408   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:39.156250   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.156261   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:39.156319   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:39.185946   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.185958   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:39.186014   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:39.215610   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.215622   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:39.215687   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:39.245933   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.245945   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:39.245952   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:39.245958   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:39.288218   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:39.288232   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:39.300049   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:39.300062   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:39.353082   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:39.353099   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:39.353107   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:39.368530   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:39.368544   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:39.916089   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:42.413720   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:41.423732   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055201327s)
	I0601 11:55:43.924128   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:44.009307   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:44.039683   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.039695   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:44.039751   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:44.067842   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.067855   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:44.067913   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:44.097345   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.097361   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:44.097434   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:44.127436   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.127448   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:44.127503   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:44.156091   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.156109   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:44.156164   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:44.185928   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.185961   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:44.186024   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:44.214767   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.214779   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:44.214838   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:44.245949   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.245962   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:44.245968   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:44.245975   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:44.287811   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:44.287825   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:44.300341   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:44.300374   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:44.358385   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:44.358412   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:44.358420   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:44.371801   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:44.371813   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:44.415399   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:46.913440   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:46.428143   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056342162s)
	I0601 11:55:48.928399   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:49.009439   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:49.041219   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.041231   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:49.041298   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:49.070249   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.070261   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:49.070314   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:49.099733   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.099745   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:49.099810   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:49.129069   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.129087   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:49.129156   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:49.160580   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.160592   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:49.160649   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:49.191907   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.191927   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:49.192017   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:49.224082   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.224094   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:49.224150   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:49.253092   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.253105   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:49.253112   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:49.253119   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:49.296708   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:49.296724   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:49.308993   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:49.309005   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:49.362195   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:49.362213   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:49.362221   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:49.375504   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:49.375515   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:48.916194   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:51.413300   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:51.430612   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055108694s)
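Every cycle opens with a process-level check before any container probe: pgrep looks for a running apiserver whose command line mentions the profile. Annotated, with flag meanings from pgrep(1):

    # -f  match against the full command line, not just the process name
    # -x  require the pattern to match that command line exactly
    # -n  report only the newest matching process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Exit status 1 (no match) is what keeps the retry loop above going.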
	I0601 11:55:53.931474   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:54.010786   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:54.041202   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.041214   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:54.041269   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:54.070844   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.070858   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:54.070913   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:54.100345   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.100358   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:54.100429   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:54.135095   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.135108   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:54.135161   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:54.164057   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.164070   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:54.164163   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:54.194214   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.194226   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:54.194283   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:54.224549   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.224563   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:54.224617   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:54.253713   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.253725   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:54.253732   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:54.253741   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:54.296231   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:54.296245   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:54.309155   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:54.309170   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:54.367180   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:54.367192   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:54.367202   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:54.380905   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:54.380918   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:53.416659   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:55.913796   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:56.441742   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060835682s)
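The log sources gathered on each pass are deliberately bounded: the last 400 lines of the kubelet and docker journald units, plus warning-and-above kernel messages. Run by hand, with the flags copied verbatim from the log:

    sudo journalctl -u kubelet -n 400     # last 400 lines of the kubelet unit
    sudo journalctl -u docker -n 400      # last 400 lines of the docker unit
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400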
	I0601 11:55:58.942261   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:59.010922   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:59.041572   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.041586   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:59.041646   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:59.071435   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.071447   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:59.071510   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:59.102114   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.102126   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:59.102180   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:59.131205   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.131218   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:59.131290   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:59.161117   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.161144   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:59.161199   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:59.192225   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.192237   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:59.192291   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:59.222459   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.222472   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:59.222526   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:59.252831   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.252844   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:59.252851   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:59.252859   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:58.415467   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:00.915615   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:01.309035   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056190082s)
	I0601 11:56:01.309146   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:01.309153   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:01.351333   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:01.351348   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:01.363658   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:01.363670   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:01.419248   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:01.419262   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:01.419269   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:03.932268   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:04.010915   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:04.041445   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.041457   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:04.041511   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:04.071011   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.071024   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:04.071085   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:04.104002   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.104013   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:04.104077   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:04.134006   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.134019   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:04.134100   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:04.164966   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.164980   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:04.165051   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:04.195574   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.195585   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:04.195641   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:04.226690   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.226702   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:04.226761   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:04.255356   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.255369   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:04.255376   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:04.255397   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:04.299830   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:04.299845   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:04.311638   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:04.311650   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:04.366259   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:04.366299   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:04.366307   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:04.379569   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:04.379580   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:03.413050   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:05.414664   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:07.414801   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:06.441255   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061688435s)
	I0601 11:56:08.942583   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:09.010888   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:09.041505   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.041516   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:09.041582   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:09.069955   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.069968   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:09.070020   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:09.100291   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.100302   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:09.100355   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:09.128780   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.128791   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:09.128844   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:09.158028   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.158040   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:09.158100   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:09.188003   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.188016   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:09.188071   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:09.217250   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.217263   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:09.217335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:09.247404   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.247416   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:09.247423   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:09.247430   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:09.291646   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:09.291660   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:09.303726   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:09.303737   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:09.359404   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:09.359416   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:09.359423   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:09.372338   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:09.372352   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:09.914692   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:11.914813   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:11.438025   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065685968s)
	I0601 11:56:13.938356   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:14.010482   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:14.042655   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.042666   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:14.042721   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:14.073307   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.073335   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:14.073392   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:14.103025   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.103036   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:14.103091   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:14.132511   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.132524   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:14.132583   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:14.162337   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.162349   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:14.162404   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:14.192882   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.192896   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:14.192952   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:14.222438   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.222451   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:14.222506   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:14.252850   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.252863   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:14.252871   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:14.252878   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:14.265274   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:14.265300   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:14.413751   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:16.913656   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:16.319655   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0543632s)
	I0601 11:56:16.319773   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:16.319781   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:16.360376   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:16.360390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:16.373260   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:16.373293   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:16.428799   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
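
The cycle above is minikube's health probe for the v1.16.0 profile (pid 28319): docker ps -a --filter=name=k8s_<component> --format={{.ID}} runs once per control-plane component, every probe returns zero containers, and kubectl describe nodes is refused on localhost:8443 because no apiserver container ever came up. A minimal Go sketch of that per-component probe follows; the function name and structure are illustrative assumptions, not minikube's actual logs.go code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers approximates the docker-ps probe in the log above: ask
    // Docker for all containers (running or exited) whose name carries the
    // k8s_<component> prefix that kubelet gives control-plane containers.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kubernetes-dashboard", "storage-provisioner",
            "kube-controller-manager",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("probe %s: %v\n", c, err)
                continue
            }
            // In this run every component prints "0 containers", matching the log.
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
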
	I0601 11:56:18.930318   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:19.010706   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:19.041493   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.041505   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:19.041566   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:19.071367   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.071377   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:19.071438   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:19.102204   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.102217   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:19.102273   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:19.134887   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.134899   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:19.134960   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:19.165401   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.165414   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:19.165481   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:19.199809   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.199820   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:19.199917   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:19.231653   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.231665   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:19.231722   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:19.261391   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.261403   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:19.261410   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:19.261416   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:19.304944   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:19.304958   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:19.316813   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:19.316825   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:19.372616   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:19.372627   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:19.372633   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:19.385307   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:19.385318   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:18.913944   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:20.915285   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:21.446084   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060778195s)
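
The "container status" probe is a shell one-liner with two layers of fallback: `which crictl || echo crictl` keeps the command word non-empty even when crictl is not on root's PATH, and `|| sudo docker ps -a` falls back to Docker if crictl fails outright. Each run takes just over two seconds here, which is why ssh_runner.go logs a Completed line with a duration. A hedged Go equivalent of issuing that one-liner over a local shell (exec.Command stands in for minikube's SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback one-liner as in the log: prefer crictl, else docker.
        const probe = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", probe).CombinedOutput()
        if err != nil {
            fmt.Println("container status probe failed:", err)
        }
        fmt.Print(string(out))
    }
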
	I0601 11:56:23.946972   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:24.009400   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:24.039656   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.039669   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:24.039728   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:24.070582   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.070594   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:24.070651   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:24.100855   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.100867   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:24.100920   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:24.131557   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.131567   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:24.131627   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:24.161584   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.161596   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:24.161652   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:24.191550   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.191562   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:24.191632   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:24.223779   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.223792   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:24.223849   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:24.254796   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.254809   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:24.254816   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:24.254823   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:24.299122   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:24.299137   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:24.311260   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:24.311276   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:24.366958   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:24.366989   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:24.366995   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:24.380157   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:24.380171   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:23.412235   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:25.414566   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:26.434527   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054347865s)
	I0601 11:56:28.934821   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:29.010609   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:29.042687   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.042700   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:29.042757   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:29.071650   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.071663   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:29.071720   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:29.100444   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.100456   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:29.100516   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:29.130300   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.130313   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:29.130370   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:29.160069   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.160081   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:29.160136   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:29.189354   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.189366   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:29.189420   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:29.218871   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.218883   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:29.218938   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:29.249986   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.249998   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:29.250005   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:29.250011   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:29.289956   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:29.289969   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:29.301893   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:29.301922   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:29.354235   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:29.354260   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:29.354288   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:29.367183   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:29.367196   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:27.915544   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:29.916311   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:32.413455   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:31.425251   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058068657s)
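
The timestamps show a steady cadence of roughly five seconds between iterations (11:56:13, :18, :23, :28, ...): each one starts with sudo pgrep -xnf kube-apiserver.*minikube.* (-f matches the full command line, -x requires an exact match, -n takes the newest hit) and, finding nothing, falls through to the container scan and log gathering above. A stdlib sketch of that poll loop; the 5 s interval is read off the timestamps, not taken from minikube's source.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning reports whether a kube-apiserver process exists;
    // pgrep exits non-zero when no process matches the pattern.
    func apiserverRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        ticker := time.NewTicker(5 * time.Second) // cadence inferred from the log
        defer ticker.Stop()
        for range ticker.C {
            if apiserverRunning() {
                fmt.Println("apiserver process found")
                return
            }
            fmt.Println("no apiserver yet; re-scan containers and gather logs")
        }
    }
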
	I0601 11:56:33.925564   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:34.008864   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:34.040390   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.040403   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:34.040457   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:34.070772   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.070785   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:34.070845   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:34.100100   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.100115   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:34.100189   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:34.131817   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.131832   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:34.131891   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:34.165170   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.165182   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:34.165240   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:34.196333   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.196346   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:34.196401   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:34.227456   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.227468   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:34.227522   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:34.255880   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.255896   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:34.255905   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:34.255911   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:34.415552   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:36.913902   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:36.313109   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057210284s)
	I0601 11:56:36.313220   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:36.313228   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:36.355277   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:36.355295   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:36.367936   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:36.367949   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:36.427265   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:36.427277   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:36.427284   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:38.944432   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:39.010467   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:39.042318   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.042330   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:39.042389   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:39.071800   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.071811   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:39.071865   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:39.102235   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.102247   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:39.102304   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:39.133642   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.133655   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:39.133711   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:39.162183   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.162215   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:39.162274   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:39.192299   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.192332   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:39.192402   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:39.224060   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.224073   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:39.224128   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:39.254137   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.254151   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:39.254157   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:39.254164   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:39.296037   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:39.296050   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:39.307439   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:39.307450   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:39.365141   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:39.365151   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:39.365165   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:39.378713   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:39.378727   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:38.915179   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:40.909076   28155 pod_ready.go:81] duration metric: took 4m0.006566969s waiting for pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace to be "Ready" ...
	E0601 11:56:40.909095   28155 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:56:40.909107   28155 pod_ready.go:38] duration metric: took 4m13.097463038s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:56:40.909179   28155 kubeadm.go:630] restartCluster took 4m23.066207931s
	W0601 11:56:40.909259   28155 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:56:40.909275   28155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
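
Meanwhile the v1.23.6 profile (pid 28155) hit its ceiling: pod_ready.go waited exactly 4m0s for metrics-server-b955d9d8-jr5fk (4m13s counting the other system-critical pods), restartCluster was abandoned after 4m23s, and minikube fell back to kubeadm reset. The "duration metric: took ..." lines come from a poll-until-deadline pattern; below is a self-contained stdlib sketch of that shape, with intervals and the check function chosen purely for demonstration.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check every interval until it reports ready or the
    // timeout elapses, mirroring the 4m0s wait recorded in the log above.
    func waitFor(check func() (bool, error), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ready, err := check()
            if err != nil {
                return err
            }
            if ready {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for pod to be Ready")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        // Always not-Ready, like metrics-server in this run; short values for demo.
        err := waitFor(func() (bool, error) { return false, nil },
            2*time.Second, 10*time.Second)
        fmt.Printf("duration metric: took %s, err: %v\n", time.Since(start), err)
    }
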
	I0601 11:56:41.442670   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063954081s)
	I0601 11:56:43.943321   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:44.009335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:44.039950   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.039961   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:44.040015   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:44.069074   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.069087   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:44.069170   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:44.098171   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.098184   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:44.098242   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:44.127158   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.127170   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:44.127231   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:44.158530   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.158543   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:44.158600   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:44.187857   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.187869   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:44.187927   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:44.217215   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.217228   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:44.217282   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:44.251676   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.251689   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:44.251697   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:44.251703   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:44.296360   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:44.296377   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:44.308411   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:44.308422   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:44.363146   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:44.363158   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:44.363165   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:44.375992   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:44.376005   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:46.429887   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053894829s)
	I0601 11:56:48.930355   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:49.010017   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:49.040810   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.040823   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:49.040878   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:49.069024   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.069037   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:49.069090   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:49.100505   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.100519   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:49.100582   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:49.133348   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.133361   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:49.133416   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:49.162816   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.162828   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:49.162886   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:49.194148   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.194160   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:49.194216   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:49.223792   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.223804   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:49.223861   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:49.254312   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.254325   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:49.254332   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:49.254339   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:49.297715   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:49.297732   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:49.309499   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:49.309514   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:49.361498   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:49.361512   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:49.361519   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:49.374038   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:49.374050   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:51.428011   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053974706s)
	I0601 11:56:53.928463   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:54.010281   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:54.041861   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.041873   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:54.041925   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:54.070132   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.070144   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:54.070203   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:54.100461   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.100473   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:54.100529   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:54.129880   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.129891   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:54.129953   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:54.158973   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.158987   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:54.159041   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:54.189002   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.189013   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:54.189069   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:54.219965   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.219978   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:54.220032   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:54.250636   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.250647   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:54.250655   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:54.250664   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:54.294346   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:54.294360   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:54.306971   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:54.306984   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:54.362857   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:54.362870   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:54.362878   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:54.376322   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:54.376337   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:56.432087   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055762931s)
	I0601 11:56:58.933231   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:59.010295   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:59.041877   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.041889   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:59.041943   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:59.070763   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.070781   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:59.070837   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:59.100715   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.100727   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:59.100786   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:59.130622   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.130634   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:59.130689   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:59.161860   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.161873   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:59.161927   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:59.190790   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.190804   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:59.190859   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:59.219375   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.219387   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:59.219442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:59.249583   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.249596   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:59.249604   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:59.249611   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:59.291437   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:59.291452   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:59.303657   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:59.303668   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:59.357073   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:59.357084   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:59.357091   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:59.369377   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:59.369390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:01.425646   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056269873s)
	I0601 11:57:03.925844   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:04.010201   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:04.041233   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.041245   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:04.041322   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:04.070072   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.070086   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:04.070153   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:04.100335   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.100354   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:04.100437   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:04.130281   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.130293   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:04.130352   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:04.167795   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.167807   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:04.167928   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:04.197871   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.197884   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:04.197940   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:04.228277   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.228288   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:04.228345   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:04.258092   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.258104   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:04.258111   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:04.258118   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:04.311843   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:04.311868   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:04.311874   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:04.324627   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:04.324640   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:06.380068   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055441139s)
	I0601 11:57:06.380181   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:06.380188   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:06.423000   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:06.423017   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:08.935789   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:09.008127   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:09.038879   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.038891   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:09.038947   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:09.068291   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.068306   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:09.068360   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:09.096958   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.096969   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:09.097039   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:09.126729   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.126741   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:09.126798   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:09.156004   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.156015   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:09.156095   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:09.184629   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.184642   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:09.184699   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:09.214073   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.214085   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:09.214146   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:09.243550   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.243562   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:09.243569   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:09.243576   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:09.286219   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:09.286233   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:09.298176   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:09.298188   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:09.352783   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:09.352796   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:09.352805   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:09.366089   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:09.366102   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:11.424220   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05813202s)
	I0601 11:57:13.925524   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:14.010071   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:14.041352   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.041365   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:14.041423   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:14.071470   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.071482   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:14.071539   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:14.100965   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.100977   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:14.101111   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:14.129799   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.129810   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:14.129863   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:14.159841   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.159852   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:14.159908   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:14.190255   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.190270   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:14.190341   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:14.219539   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.219552   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:14.219607   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:14.247896   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.247930   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:14.247937   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:14.247945   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:14.291044   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:14.291058   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:14.304512   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:14.304523   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:14.356717   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:14.356731   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:14.356738   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:14.368729   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:14.368740   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:19.339269   28155 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.430441915s)
	I0601 11:57:19.339331   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:19.351465   28155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:57:19.359858   28155 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:57:19.359933   28155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:57:19.369371   28155 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:57:19.369402   28155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
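
The config check exits with status 2 because the kubeadm reset that just finished (38.4 s) already deleted the four kubeconfigs under /etc/kubernetes, so there is no stale config to clean up and minikube proceeds straight to a fresh kubeadm init, pre-waiving the preflight checks expected to trip inside a docker-driver container (Swap, Mem, SystemVerification, the in-use port and pre-existing manifest directories). A sketch of interpreting that ls probe; the paths are copied from the log, while the decision logic is an assumption about minikube's behavior rather than its code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe the same four kubeconfigs the log checks; ls exits
        // non-zero (status 2) if any of them is missing.
        cmd := exec.Command("sudo", "ls", "-la",
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf")
        if err := cmd.Run(); err != nil {
            // Configs gone after reset: nothing stale to clean, init fresh.
            fmt.Println("config check failed, skipping stale config cleanup:", err)
            return
        }
        fmt.Println("existing configs found; stale config cleanup would run first")
    }
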
	I0601 11:57:16.428777   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060048525s)
	I0601 11:57:18.929035   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:19.008006   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:19.040365   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.040380   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:19.040440   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:19.073546   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.073561   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:19.073626   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:19.108192   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.108212   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:19.108276   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:19.142430   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.142443   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:19.142538   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:19.175636   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.175650   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:19.175719   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:19.208195   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.208209   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:19.208267   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:19.240564   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.240576   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:19.240633   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:19.273419   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.273432   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:19.273439   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:19.273446   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:19.331449   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
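Note: every "connection to the server localhost:8443 was refused" block in this run means the same thing: no kube-apiserver is listening on the node yet, so `kubectl describe nodes` cannot succeed. The probes minikube itself uses appear throughout the log; run by hand they might look like this (a sketch, assuming a shell on the node, e.g. via `minikube ssh`):

    # Is an apiserver process up, and does its healthz endpoint answer?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && curl -ks https://localhost:8443/healthz \
      || echo "apiserver not running yet"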
	I0601 11:57:19.331463   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:19.331471   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:19.346208   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:19.346222   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:19.860439   28155 out.go:204]   - Generating certificates and keys ...
	I0601 11:57:20.661769   28155 out.go:204]   - Booting up control plane ...
	I0601 11:57:21.407126   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060916068s)
	I0601 11:57:21.407235   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:21.407242   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:21.450235   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:21.450250   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
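Note: each retry of the gather loop collects the same four sources seen above and below: the kubelet and docker journals, recent kernel warnings, and container status. The equivalent manual commands, copied from the log lines:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a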
	I0601 11:57:23.962515   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:24.007999   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:24.046910   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.046922   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:24.046977   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:24.078502   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.078515   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:24.078608   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:24.111688   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.111701   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:24.111764   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:24.143708   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.143721   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:24.143783   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:24.175299   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.175313   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:24.175387   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:24.210853   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.210866   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:24.210936   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:24.245012   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.245026   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:24.245095   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:24.281872   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.281885   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:24.281892   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:24.281899   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:24.299283   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:24.299300   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:26.356685   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057383504s)
	I0601 11:57:26.356862   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:26.356871   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:26.401842   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:26.401859   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:26.414869   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:26.414883   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:26.467468   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:28.967580   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:29.008160   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:29.040269   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.040281   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:29.040356   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:29.072206   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.072220   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:29.072281   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:29.105279   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.105291   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:29.105349   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:29.134791   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.134804   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:29.134860   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:29.164913   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.164925   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:29.164979   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:29.194121   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.194134   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:29.194190   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:29.224082   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.224094   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:29.224148   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:29.254968   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.255008   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:29.255015   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:29.255022   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:29.267556   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:29.267568   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:27.711091   28155 out.go:204]   - Configuring RBAC rules ...
	I0601 11:57:28.165356   28155 cni.go:95] Creating CNI manager for ""
	I0601 11:57:28.165369   28155 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:57:28.165393   28155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:57:28.165467   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:28.165513   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=no-preload-20220601115057-16804 minikube.k8s.io/updated_at=2022_06_01T11_57_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:28.346909   28155 ops.go:34] apiserver oom_adj: -16
	I0601 11:57:28.346923   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:28.908441   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:29.408460   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:29.909413   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:30.407861   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:30.907854   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:31.408383   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:31.907991   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:32.409782   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:31.323029   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055474965s)
	I0601 11:57:31.323132   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:31.323140   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:31.365311   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:31.365325   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:31.377327   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:31.377341   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:31.435595   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:33.936600   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:34.008912   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:34.040625   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.040639   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:34.040694   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:34.072501   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.072513   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:34.072569   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:34.104579   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.104591   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:34.104653   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:34.135775   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.135787   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:34.135845   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:34.166312   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.166323   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:34.166381   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:34.195560   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.195572   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:34.195627   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:34.224692   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.224703   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:34.224765   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:34.255698   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.255710   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:34.255717   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:34.255727   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:34.300652   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:34.300667   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:34.313320   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:34.313334   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:34.368671   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:34.368683   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:34.368690   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:34.381336   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:34.381349   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:32.909883   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:33.407682   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:33.907836   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:34.408286   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:34.907960   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:35.408597   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:35.908193   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:36.407746   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:36.909619   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:37.407908   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:36.441359   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060024322s)
	I0601 11:57:38.943165   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:39.007618   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:39.046794   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.046808   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:39.046868   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:39.079598   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.079612   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:39.079683   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:39.109592   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.109604   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:39.109661   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:39.140083   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.140095   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:39.140151   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:39.170917   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.170929   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:39.170987   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:39.200633   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.200644   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:39.200698   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:39.232233   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.232274   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:39.232332   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:39.262769   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.262781   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:39.262788   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:39.262794   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:37.908968   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:38.407947   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:38.907834   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:39.408342   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:39.909689   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:40.407680   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:40.459929   28155 kubeadm.go:1045] duration metric: took 12.294665413s to wait for elevateKubeSystemPrivileges.
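Note: the long run of `kubectl get sa default` lines above is a roughly half-second poll; elevateKubeSystemPrivileges is considered complete once the default service account exists. A hand-rolled equivalent of that wait, as a sketch:

    # Poll until the default service account appears (what the repeated lines do).
    until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done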
	I0601 11:57:40.459949   28155 kubeadm.go:397] StartCluster complete in 5m22.663179926s
	I0601 11:57:40.459970   28155 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:57:40.460046   28155 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:57:40.460585   28155 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:57:40.978234   28155 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220601115057-16804" rescaled to 1
	I0601 11:57:40.978283   28155 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:57:40.978297   28155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:57:40.978323   28155 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 11:57:40.999829   28155 out.go:177] * Verifying Kubernetes components...
	I0601 11:57:40.978489   28155 config.go:178] Loaded profile config "no-preload-20220601115057-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:57:40.999900   28155 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:40.999902   28155 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:40.999902   28155 addons.go:65] Setting metrics-server=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:40.999907   28155 addons.go:65] Setting dashboard=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:41.040649   28155 addons.go:153] Setting addon dashboard=true in "no-preload-20220601115057-16804"
	I0601 11:57:41.040654   28155 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220601115057-16804"
	W0601 11:57:41.040664   28155 addons.go:165] addon dashboard should already be in state true
	I0601 11:57:41.040701   28155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220601115057-16804"
	I0601 11:57:41.040717   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 11:57:41.040689   28155 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:57:41.040654   28155 addons.go:153] Setting addon metrics-server=true in "no-preload-20220601115057-16804"
	W0601 11:57:41.040775   28155 addons.go:165] addon metrics-server should already be in state true
	I0601 11:57:41.040778   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.040748   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.040811   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.041053   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.041121   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.041154   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.041181   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.098903   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.098931   28155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
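Note: the pipeline above injects a host record into the live CoreDNS config in three steps: dump the coredns ConfigMap as YAML, use sed to insert a hosts{} block (mapping host.minikube.internal to the host gateway, 192.168.65.2 here) just before the `forward . /etc/resolv.conf` plugin line, and feed the result back through `kubectl replace`. The same pipeline with the kubectl path shortened for readability:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -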
	I0601 11:57:41.172010   28155 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220601115057-16804"
	I0601 11:57:41.211589   28155 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0601 11:57:41.211668   28155 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:57:41.189830   28155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:57:41.211763   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.235116   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:57:41.271485   28155 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:57:41.272074   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.292117   28155 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:57:41.292144   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:57:41.292159   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:57:41.313668   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.313669   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.387138   28155 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:57:41.347801   28155 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220601115057-16804" to be "Ready" ...
	I0601 11:57:41.446511   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:57:41.446542   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:57:41.447209   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.454733   28155 node_ready.go:49] node "no-preload-20220601115057-16804" has status "Ready":"True"
	I0601 11:57:41.454795   28155 node_ready.go:38] duration metric: took 9.276451ms waiting for node "no-preload-20220601115057-16804" to be "Ready" ...
	I0601 11:57:41.454812   28155 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:57:41.465521   28155 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-g4gfh" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:41.472437   28155 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:57:41.472476   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:57:41.472621   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.497682   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.502048   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.577692   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.582026   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.655324   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:57:41.655343   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:57:41.736951   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:57:41.736970   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:57:41.745184   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:57:41.829738   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:57:41.829758   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:57:41.836663   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:57:41.932059   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:57:41.933127   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:57:41.933144   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:57:41.963220   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:57:41.963233   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:57:42.145334   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:57:42.145351   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:57:42.255177   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:57:42.255192   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:57:42.436801   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:57:42.436821   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:57:42.530525   28155 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.431587535s)
	I0601 11:57:42.541721   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:57:42.566164   28155 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 11:57:42.566174   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:57:42.649431   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:57:42.649452   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:57:42.744750   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:57:42.744773   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:57:42.844258   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:57:42.844276   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:57:42.942530   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:57:43.031299   28155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0992243s)
	I0601 11:57:43.031327   28155 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220601115057-16804"
	I0601 11:57:43.486892   28155 pod_ready.go:102] pod "coredns-64897985d-g4gfh" in "kube-system" namespace has status "Ready":"False"
	I0601 11:57:43.956967   28155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.014417476s)
	I0601 11:57:43.981671   28155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
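Note: every addon enabled above follows the same two-step pattern visible in the log: manifests are scp'd from memory into /etc/kubernetes/addons/ on the node, then applied in a single kubectl invocation with one -f per file. Reduced to its shape (showing only three of the ten dashboard manifests applied in the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.23.6/kubectl apply \
        -f /etc/kubernetes/addons/dashboard-ns.yaml \
        -f /etc/kubernetes/addons/dashboard-dp.yaml \
        -f /etc/kubernetes/addons/dashboard-svc.yaml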
	I0601 11:57:41.329410   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066626686s)
	I0601 11:57:41.329597   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:41.329608   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:41.383544   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:41.383564   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:41.408721   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:41.408743   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:41.509315   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:41.509346   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:41.509369   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:44.030515   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:44.507644   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:44.537454   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.537481   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:44.537554   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:44.568183   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.568197   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:44.568261   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:44.599536   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.599547   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:44.599606   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:44.630140   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.630154   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:44.630217   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:44.660777   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.660790   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:44.660846   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:44.691042   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.691055   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:44.691143   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:44.720629   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.720641   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:44.720699   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:44.750426   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.750438   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:44.750445   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:44.750452   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:44.765309   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:44.765324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:44.022335   28155 addons.go:417] enableAddons completed in 3.044053036s
	I0601 11:57:45.987315   28155 pod_ready.go:102] pod "coredns-64897985d-g4gfh" in "kube-system" namespace has status "Ready":"False"
	I0601 11:57:46.984563   28155 pod_ready.go:92] pod "coredns-64897985d-g4gfh" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:46.984578   28155 pod_ready.go:81] duration metric: took 5.51910266s waiting for pod "coredns-64897985d-g4gfh" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.984584   28155 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-t97fz" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.990513   28155 pod_ready.go:92] pod "coredns-64897985d-t97fz" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:46.990522   28155 pod_ready.go:81] duration metric: took 5.933055ms waiting for pod "coredns-64897985d-t97fz" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.990528   28155 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.995569   28155 pod_ready.go:92] pod "etcd-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:46.995578   28155 pod_ready.go:81] duration metric: took 5.045027ms waiting for pod "etcd-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.995584   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.000562   28155 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.000571   28155 pod_ready.go:81] duration metric: took 4.982774ms waiting for pod "kube-apiserver-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.000578   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.005154   28155 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.005165   28155 pod_ready.go:81] duration metric: took 4.580517ms waiting for pod "kube-controller-manager-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.005172   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77tsv" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.383579   28155 pod_ready.go:92] pod "kube-proxy-77tsv" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.383590   28155 pod_ready.go:81] duration metric: took 378.417828ms waiting for pod "kube-proxy-77tsv" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.383597   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.782529   28155 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.782539   28155 pod_ready.go:81] duration metric: took 398.94254ms waiting for pod "kube-scheduler-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.782547   28155 pod_ready.go:38] duration metric: took 6.327785655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:57:47.782563   28155 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:57:47.782617   28155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:47.795517   28155 api_server.go:71] duration metric: took 6.817292849s to wait for apiserver process to appear ...
	I0601 11:57:47.795534   28155 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:57:47.795543   28155 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59709/healthz ...
	I0601 11:57:47.801557   28155 api_server.go:266] https://127.0.0.1:59709/healthz returned 200:
	ok
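Note: the healthz probe goes through the host port Docker published for the apiserver; 59709 here is the host side of the container's 8443/tcp mapping queried earlier with `docker container inspect`. From the macOS host the same check is simply:

    # -k because the apiserver's cert is not trusted by the host; expect "ok".
    curl -ks https://127.0.0.1:59709/healthz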
	I0601 11:57:47.802639   28155 api_server.go:140] control plane version: v1.23.6
	I0601 11:57:47.802647   28155 api_server.go:130] duration metric: took 7.108692ms to wait for apiserver health ...
	I0601 11:57:47.802654   28155 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:57:47.987584   28155 system_pods.go:59] 9 kube-system pods found
	I0601 11:57:47.987599   28155 system_pods.go:61] "coredns-64897985d-g4gfh" [5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9] Running
	I0601 11:57:47.987604   28155 system_pods.go:61] "coredns-64897985d-t97fz" [e084f502-bc7c-4ba5-9f07-990582d89dcd] Running
	I0601 11:57:47.987607   28155 system_pods.go:61] "etcd-no-preload-20220601115057-16804" [07565dba-74b1-4ce7-84b5-6dc3870c5f14] Running
	I0601 11:57:47.987611   28155 system_pods.go:61] "kube-apiserver-no-preload-20220601115057-16804" [6877c44e-2636-4e51-9471-f303d0d0bd86] Running
	I0601 11:57:47.987615   28155 system_pods.go:61] "kube-controller-manager-no-preload-20220601115057-16804" [9a06a3f1-e0cd-412f-96b3-7d4e551347e4] Running
	I0601 11:57:47.987618   28155 system_pods.go:61] "kube-proxy-77tsv" [9fb29050-1356-4744-bd4d-456dbacdf15c] Running
	I0601 11:57:47.987622   28155 system_pods.go:61] "kube-scheduler-no-preload-20220601115057-16804" [40175cd4-f440-44d8-b296-c7283261a1e4] Running
	I0601 11:57:47.987626   28155 system_pods.go:61] "metrics-server-b955d9d8-kz2wj" [85328a99-1f1c-4ee1-b140-b8b04cc702da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 11:57:47.987630   28155 system_pods.go:61] "storage-provisioner" [60233eed-e0b4-4f81-bd4e-ec53371ffc27] Running
	I0601 11:57:47.987634   28155 system_pods.go:74] duration metric: took 184.979327ms to wait for pod list to return data ...
	I0601 11:57:47.987639   28155 default_sa.go:34] waiting for default service account to be created ...
	I0601 11:57:48.221245   28155 default_sa.go:45] found service account: "default"
	I0601 11:57:48.221266   28155 default_sa.go:55] duration metric: took 233.624988ms for default service account to be created ...
	I0601 11:57:48.221277   28155 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 11:57:48.386943   28155 system_pods.go:86] 8 kube-system pods found
	I0601 11:57:48.386964   28155 system_pods.go:89] "coredns-64897985d-g4gfh" [5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9] Running
	I0601 11:57:48.386974   28155 system_pods.go:89] "etcd-no-preload-20220601115057-16804" [07565dba-74b1-4ce7-84b5-6dc3870c5f14] Running
	I0601 11:57:48.386984   28155 system_pods.go:89] "kube-apiserver-no-preload-20220601115057-16804" [6877c44e-2636-4e51-9471-f303d0d0bd86] Running
	I0601 11:57:48.386995   28155 system_pods.go:89] "kube-controller-manager-no-preload-20220601115057-16804" [9a06a3f1-e0cd-412f-96b3-7d4e551347e4] Running
	I0601 11:57:48.387005   28155 system_pods.go:89] "kube-proxy-77tsv" [9fb29050-1356-4744-bd4d-456dbacdf15c] Running
	I0601 11:57:48.387025   28155 system_pods.go:89] "kube-scheduler-no-preload-20220601115057-16804" [40175cd4-f440-44d8-b296-c7283261a1e4] Running
	I0601 11:57:48.387032   28155 system_pods.go:89] "metrics-server-b955d9d8-kz2wj" [85328a99-1f1c-4ee1-b140-b8b04cc702da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 11:57:48.387038   28155 system_pods.go:89] "storage-provisioner" [60233eed-e0b4-4f81-bd4e-ec53371ffc27] Running
	I0601 11:57:48.387045   28155 system_pods.go:126] duration metric: took 165.76319ms to wait for k8s-apps to be running ...
	I0601 11:57:48.387051   28155 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 11:57:48.387105   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:48.423774   28155 system_svc.go:56] duration metric: took 36.719143ms WaitForService to wait for kubelet.
	I0601 11:57:48.423792   28155 kubeadm.go:572] duration metric: took 7.445578877s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 11:57:48.423811   28155 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:57:48.582300   28155 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 11:57:48.582312   28155 node_conditions.go:123] node cpu capacity is 6
	I0601 11:57:48.582320   28155 node_conditions.go:105] duration metric: took 158.507046ms to run NodePressure ...
	I0601 11:57:48.582327   28155 start.go:213] waiting for startup goroutines ...
	I0601 11:57:48.614642   28155 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 11:57:48.661447   28155 out.go:177] * Done! kubectl is now configured to use "no-preload-20220601115057-16804" cluster and "default" namespace by default
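Note: a quick smoke test from the host at this point could be (hypothetical commands; the context name comes from the "Done!" line, and the log already flags the kubectl 1.24.0 vs cluster 1.23.6 minor skew as acceptable):

    kubectl config current-context     # no-preload-20220601115057-16804
    kubectl -n kube-system get pods    # the pods reported Running above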
	I0601 11:57:46.833468   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06815266s)
	I0601 11:57:46.833611   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:46.833623   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:46.894511   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:46.894539   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:46.907075   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:46.907090   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:46.971671   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:49.472151   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:49.507935   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:49.537886   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.537898   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:49.537960   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:49.568803   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.568816   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:49.568872   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:49.598891   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.598903   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:49.598962   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:49.628803   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.628815   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:49.628874   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:49.660107   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.660118   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:49.660209   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:49.691421   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.691437   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:49.691507   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:49.722844   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.722857   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:49.722911   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:49.755171   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.755183   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:49.755191   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:49.755211   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:49.768071   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:49.768082   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:51.830872   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06280221s)
	I0601 11:57:51.830991   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:51.830999   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:51.895350   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:51.895372   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:51.910561   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:51.910601   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:51.975211   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:54.475645   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:54.507404   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:54.546927   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.546940   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:54.547000   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:54.579713   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.579728   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:54.579797   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:54.614843   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.614860   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:54.614948   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:54.651551   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.651565   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:54.651624   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:54.687625   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.687640   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:54.687712   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:54.723794   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.723808   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:54.723872   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:54.759036   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.759050   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:54.759111   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:54.791361   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.791375   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:54.791382   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:54.791390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:54.839700   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:54.839716   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:54.854532   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:54.854547   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:54.915142   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:54.915157   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:54.915164   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:54.928393   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:54.928405   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:56.983268   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054875531s)
	I0601 11:57:59.485573   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:59.496486   28319 kubeadm.go:630] restartCluster took 4m4.930290056s
	W0601 11:57:59.496562   28319 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 11:57:59.496576   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:57:59.913633   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:59.923079   28319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:57:59.931076   28319 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:57:59.931127   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:57:59.939179   28319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:57:59.939204   28319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:58:00.683895   28319 out.go:204]   - Generating certificates and keys ...
	I0601 11:58:01.523528   28319 out.go:204]   - Booting up control plane ...
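
	restartCluster above gave up after roughly four minutes because its apiserver health wait never saw a kube-apiserver process, so minikube fell back to a full kubeadm reset followed by kubeadm init. A rough manual equivalent of that health check, assuming the localhost:8443 endpoint the describe-nodes errors point at (hypothetical commands, not taken from the harness):

	    # The first check mirrors the pgrep logged above; the second hits the
	    # apiserver healthz endpoint directly, skipping TLS verification.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && \
	      curl -sk https://localhost:8443/healthz
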
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:52:14 UTC, end at Wed 2022-06-01 18:58:44 UTC. --
	Jun 01 18:56:57 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:56:57.455709967Z" level=info msg="ignoring event" container=965146ffacad64e9d584ded7091b7ecdb2747db12675878a59a4b9155297301a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:56:57 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:56:57.662879005Z" level=info msg="ignoring event" container=f0fb56665c1d1eefbd86639fc1143061391bd15d1806b671e6ede32b25c70cb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:07 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:07.727957851Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d276f8d3df23be81a96b202fe89d29ef1e399e264c0239c86a3e2cfff8ae44f7
	Jun 01 18:57:07 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:07.757529771Z" level=info msg="ignoring event" container=d276f8d3df23be81a96b202fe89d29ef1e399e264c0239c86a3e2cfff8ae44f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:07 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:07.873040933Z" level=info msg="ignoring event" container=72a124d3de26891a46029aef7ff25a8fc05ae016371085db9e7bfa75fcf2761a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:17 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:17.964549887Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=616b35d7cfd8399befecc2e9313100de26bd13682553039ca5105b429fe9405f
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.021035647Z" level=info msg="ignoring event" container=616b35d7cfd8399befecc2e9313100de26bd13682553039ca5105b429fe9405f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.121322161Z" level=info msg="ignoring event" container=a666382066585752b99d3ea2b0612aa09dbaf132d6fe010fc8e99f758971dc2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.223216098Z" level=info msg="ignoring event" container=5e1e864e6525f7d441376e80d2cd57ae3566a0a7f2e919b2ae7d23492c82ff40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.322728007Z" level=info msg="ignoring event" container=dcedf6ae48a5f2ca7b69578459b074e060aaf794d440fb1e70c03d54fb9654a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.449998975Z" level=info msg="ignoring event" container=92476de0b2fa718d9ae037567aff75a8f6923252987b4a69a1e54b3a60d329c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:44 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:44.152194441Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:44 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:44.152684609Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:44 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:44.154000434Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:45.518368421Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 18:57:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:45.733924115Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 18:57:47 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:47.676468287Z" level=info msg="ignoring event" container=5576ebcd6b3efa134b9c442b0647348cef29804d2b542784afe1b97b7a2dc22e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:47 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:47.848223410Z" level=info msg="ignoring event" container=f869517dd5468d6a46bb30af6635548849d08d4fcd3bbde5c425eaf3d60cfbfc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:49 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:49.354790084Z" level=info msg="ignoring event" container=c16ac3e56aaa7f2f29a5982339afdeb3686f792e5c1b87df15682be069de7dd7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:49 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:49.456786438Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 18:57:50 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:50.321352553Z" level=info msg="ignoring event" container=01c76d66326a0454297a81f8f616dce83a05cd37a6537e261b874840deee1f08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:59 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:59.374088220Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:59 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:59.374132104Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:59 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:59.375540565Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:58:07 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:58:07.452185549Z" level=info msg="ignoring event" container=13dbafc5af21fed703e0494d3cb234721f4e631fb759cecabf5c8020b24484f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	13dbafc5af21f       a90209bb39e3d                                                                                    37 seconds ago       Exited              dashboard-metrics-scraper   2                   cd910f8cc8526
	7d328481c5cd4       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   50 seconds ago       Running             kubernetes-dashboard        0                   f4d13ae8da737
	e2cc248695c73       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   2b1f146b179ae
	ba2ed58902e20       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   71f7928ea4b33
	c7958e3694523       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   f2eac62c164d2
	b0b923861477b       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   4772e1d81d09c
	8b39e72f9af37       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   4ac7db1243b2f
	14b019ef5d0f3       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   f7df7ad08e5ec
	3b5d6839a5a99       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   46160156c799a
	
	* 
	* ==> coredns [ba2ed58902e2] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220601115057-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220601115057-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=no-preload-20220601115057-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_57_28_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 18:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220601115057-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 18:58:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    no-preload-20220601115057-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                3a379177-f2a3-4802-80f1-2537a7a88138
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-g4gfh                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-no-preload-20220601115057-16804                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kube-apiserver-no-preload-20220601115057-16804             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-no-preload-20220601115057-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-77tsv                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-no-preload-20220601115057-16804             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-b955d9d8-kz2wj                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-59knw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-rnkpm                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 61s                kube-proxy  
	  Normal  NodeHasSufficientMemory  83s (x4 over 83s)  kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x4 over 83s)  kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x3 over 83s)  kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 83s                kubelet     Starting kubelet.
	  Normal  Starting                 76s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             76s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  76s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                66s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeReady
	  Normal  Starting                 3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s (x2 over 3s)    kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s (x2 over 3s)    kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s (x2 over 3s)    kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet     Node no-preload-20220601115057-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet     Node no-preload-20220601115057-16804 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [8b39e72f9af3] <==
	* {"level":"info","ts":"2022-06-01T18:57:22.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-01T18:57:22.271Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-01T18:57:22.277Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T18:57:22.277Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T18:57:22.277Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T18:57:22.277Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T18:57:22.278Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:no-preload-20220601115057-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T18:57:22.864Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T18:57:22.865Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  18:58:44 up  1:01,  0 users,  load average: 0.51, 0.89, 1.06
	Linux no-preload-20220601115057-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b0b923861477] <==
	* I0601 18:57:26.235766       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 18:57:26.259882       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 18:57:26.304430       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 18:57:26.308157       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 18:57:26.309310       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 18:57:26.312967       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 18:57:27.096999       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 18:57:27.940164       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 18:57:27.947591       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 18:57:27.955362       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 18:57:28.145956       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 18:57:40.531931       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 18:57:40.880219       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 18:57:42.639364       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 18:57:42.964846       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.107.23.182]
	W0601 18:57:43.752250       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 18:57:43.752508       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 18:57:43.752605       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 18:57:43.943686       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.99.101.199]
	I0601 18:57:43.957667       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.135.92]
	W0601 18:58:43.709304       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 18:58:43.709403       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 18:58:43.709410       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3b5d6839a5a9] <==
	* I0601 18:57:43.765223       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 18:57:43.765263       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.770141       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 18:57:43.772428       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.772502       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.776849       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.777210       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.784001       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.784031       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.790178       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.790476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 18:57:43.836853       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-59knw"
	I0601 18:57:43.854862       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-rnkpm"
	E0601 18:58:41.292227       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0601 18:58:41.293398       1 event.go:294] "Event occurred" object="no-preload-20220601115057-16804" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node no-preload-20220601115057-16804 status is now: NodeNotReady"
	W0601 18:58:41.297555       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0601 18:58:41.301429       1 event.go:294] "Event occurred" object="kube-system/etcd-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.305565       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rnkpm" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.395144       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-77tsv" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.400980       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-g4gfh" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.407465       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.494747       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.501177       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.507083       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 18:58:41.507361       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-proxy [c7958e369452] <==
	* I0601 18:57:42.453004       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 18:57:42.453640       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 18:57:42.453728       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 18:57:42.636547       1 server_others.go:206] "Using iptables Proxier"
	I0601 18:57:42.636590       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 18:57:42.636598       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 18:57:42.636614       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 18:57:42.636881       1 server.go:656] "Version info" version="v1.23.6"
	I0601 18:57:42.637580       1 config.go:226] "Starting endpoint slice config controller"
	I0601 18:57:42.637604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 18:57:42.637645       1 config.go:317] "Starting service config controller"
	I0601 18:57:42.637648       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 18:57:42.737986       1 shared_informer.go:247] Caches are synced for service config 
	I0601 18:57:42.738191       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [14b019ef5d0f] <==
	* E0601 18:57:25.066282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 18:57:25.066645       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.066675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.066843       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 18:57:25.066856       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 18:57:25.066936       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 18:57:25.067019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 18:57:25.067202       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 18:57:25.067237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 18:57:25.904963       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 18:57:25.905002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 18:57:25.905763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.905795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.953099       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.953150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.977555       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 18:57:25.977592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 18:57:25.991608       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.991644       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.996441       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 18:57:25.996477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 18:57:26.083117       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 18:57:26.083171       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 18:57:26.427030       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 18:57:28.759621       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:52:14 UTC, end at Wed 2022-06-01 18:58:45 UTC. --
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.880680    7281 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.908867    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qs7\" (UniqueName: \"kubernetes.io/projected/5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9-kube-api-access-p4qs7\") pod \"coredns-64897985d-g4gfh\" (UID: \"5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9\") " pod="kube-system/coredns-64897985d-g4gfh"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909015    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fb29050-1356-4744-bd4d-456dbacdf15c-kube-proxy\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909036    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/60233eed-e0b4-4f81-bd4e-ec53371ffc27-tmp\") pod \"storage-provisioner\" (UID: \"60233eed-e0b4-4f81-bd4e-ec53371ffc27\") " pod="kube-system/storage-provisioner"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909088    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/85328a99-1f1c-4ee1-b140-b8b04cc702da-tmp-dir\") pod \"metrics-server-b955d9d8-kz2wj\" (UID: \"85328a99-1f1c-4ee1-b140-b8b04cc702da\") " pod="kube-system/metrics-server-b955d9d8-kz2wj"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909103    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x297p\" (UniqueName: \"kubernetes.io/projected/60233eed-e0b4-4f81-bd4e-ec53371ffc27-kube-api-access-x297p\") pod \"storage-provisioner\" (UID: \"60233eed-e0b4-4f81-bd4e-ec53371ffc27\") " pod="kube-system/storage-provisioner"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909119    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9a79845e-efd3-46b7-80bb-0c7309ca22ca-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-rnkpm\" (UID: \"9a79845e-efd3-46b7-80bb-0c7309ca22ca\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rnkpm"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909136    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb29050-1356-4744-bd4d-456dbacdf15c-lib-modules\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909155    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/50c87a07-aa54-4366-9ac1-a31efd11fa2e-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-59knw\" (UID: \"50c87a07-aa54-4366-9ac1-a31efd11fa2e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909210    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnk96\" (UniqueName: \"kubernetes.io/projected/50c87a07-aa54-4366-9ac1-a31efd11fa2e-kube-api-access-wnk96\") pod \"dashboard-metrics-scraper-56974995fc-59knw\" (UID: \"50c87a07-aa54-4366-9ac1-a31efd11fa2e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909344    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9-config-volume\") pod \"coredns-64897985d-g4gfh\" (UID: \"5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9\") " pod="kube-system/coredns-64897985d-g4gfh"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909486    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4h4\" (UniqueName: \"kubernetes.io/projected/85328a99-1f1c-4ee1-b140-b8b04cc702da-kube-api-access-bw4h4\") pod \"metrics-server-b955d9d8-kz2wj\" (UID: \"85328a99-1f1c-4ee1-b140-b8b04cc702da\") " pod="kube-system/metrics-server-b955d9d8-kz2wj"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909557    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb29050-1356-4744-bd4d-456dbacdf15c-xtables-lock\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909608    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2z4\" (UniqueName: \"kubernetes.io/projected/9fb29050-1356-4744-bd4d-456dbacdf15c-kube-api-access-bz2z4\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909690    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmx7v\" (UniqueName: \"kubernetes.io/projected/9a79845e-efd3-46b7-80bb-0c7309ca22ca-kube-api-access-hmx7v\") pod \"kubernetes-dashboard-8469778f77-rnkpm\" (UID: \"9a79845e-efd3-46b7-80bb-0c7309ca22ca\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rnkpm"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909725    7281 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:44.077052    7281 request.go:665] Waited for 1.15567193s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.086155    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220601115057-16804\" already exists" pod="kube-system/etcd-no-preload-20220601115057-16804"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.302409    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220601115057-16804\" already exists" pod="kube-system/kube-scheduler-no-preload-20220601115057-16804"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.480924    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220601115057-16804\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220601115057-16804"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.681458    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220601115057-16804\" already exists" pod="kube-system/kube-apiserver-no-preload-20220601115057-16804"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038355    7281 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038439    7281 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038529    7281 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bw4h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-kz2wj_kube-system(85328a99-1f1c-4ee1-b140-b8b04cc702da): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038577    7281 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-kz2wj" podUID=85328a99-1f1c-4ee1-b140-b8b04cc702da
	
	* 
	* ==> kubernetes-dashboard [7d328481c5cd] <==
	* 2022/06/01 18:57:54 Using namespace: kubernetes-dashboard
	2022/06/01 18:57:54 Using in-cluster config to connect to apiserver
	2022/06/01 18:57:54 Using secret token for csrf signing
	2022/06/01 18:57:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 18:57:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 18:57:54 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 18:57:54 Generating JWE encryption key
	2022/06/01 18:57:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 18:57:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 18:57:54 Initializing JWE encryption key from synchronized object
	2022/06/01 18:57:54 Creating in-cluster Sidecar client
	2022/06/01 18:57:54 Serving insecurely on HTTP port: 9090
	2022/06/01 18:57:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 18:58:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 18:57:54 Starting overwatch
	
	* 
	* ==> storage-provisioner [e2cc248695c7] <==
	* I0601 18:57:43.658769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 18:57:43.666905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 18:57:43.666983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 18:57:43.673667       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 18:57:43.673850       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220601115057-16804_0a75bbf8-8297-4952-afb0-d5692f3b65b7!
	I0601 18:57:43.673970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d8917ce-acb3-4188-9f13-0eda07809269", APIVersion:"v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220601115057-16804_0a75bbf8-8297-4952-afb0-d5692f3b65b7 became leader
	I0601 18:57:43.776125       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220601115057-16804_0a75bbf8-8297-4952-afb0-d5692f3b65b7!
	

-- /stdout --
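The ErrImagePull entries for fake.domain/k8s.gcr.io/echoserver:1.4 in the kubelet log above are expected for this test rather than a defect: the metrics-server addon is deliberately registered against a nonexistent registry, as recorded in the Audit table further down. A minimal sketch of the same setup, using this run's profile name:

	out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220601115057-16804 \
		--images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain
	# the metrics-server pod then sits in ErrImagePull/ImagePullBackOff, since fake.domain never resolves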
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-kz2wj
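The query above uses a kubectl field selector to list every pod, across all namespaces, whose phase is anything other than Running; the same one-liner is useful for ad-hoc triage outside the harness. A sketch, with the context name as a placeholder:

	kubectl --context <context> get po -A \
		-o=jsonpath='{.items[*].metadata.name}' \
		--field-selector=status.phase!=Running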
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 describe pod metrics-server-b955d9d8-kz2wj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220601115057-16804 describe pod metrics-server-b955d9d8-kz2wj: exit status 1 (287.067802ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-kz2wj" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220601115057-16804 describe pod metrics-server-b955d9d8-kz2wj: exit status 1
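The NotFound exit is a list/describe race: metrics-server-b955d9d8-kz2wj was evidently deleted between the pod listing and this describe call. One way to tolerate such churn is to re-resolve the pod name immediately before describing; a sketch, assuming the addon's usual k8s-app=metrics-server label:

	pod=$(kubectl --context no-preload-20220601115057-16804 -n kube-system \
		get po -l k8s-app=metrics-server -o name | head -n 1)
	# describe only if a pod currently exists under that label
	[ -n "$pod" ] && kubectl --context no-preload-20220601115057-16804 -n kube-system describe "$pod"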
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220601115057-16804
helpers_test.go:235: (dbg) docker inspect no-preload-20220601115057-16804:

-- stdout --
	[
	    {
	        "Id": "640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0",
	        "Created": "2022-06-01T18:50:59.851635845Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208013,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:52:13.832116689Z",
	            "FinishedAt": "2022-06-01T18:52:11.806175726Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/hostname",
	        "HostsPath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/hosts",
	        "LogPath": "/var/lib/docker/containers/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0/640fdee1e7972f5863c1f9ee6da6b6baa2c98c8d612c746d3694bcbc653bfaf0-json.log",
	        "Name": "/no-preload-20220601115057-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220601115057-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220601115057-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608/merged",
	                "UpperDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608/diff",
	                "WorkDir": "/var/lib/docker/overlay2/40bd3d957ee44f5492337fafff091d4e6fb20c62b70787d5fdbb2f62e561b608/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220601115057-16804",
	                "Source": "/var/lib/docker/volumes/no-preload-20220601115057-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220601115057-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220601115057-16804",
	                "name.minikube.sigs.k8s.io": "no-preload-20220601115057-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c26eec6a274e171d2c8c60c4d4901a86316f39a02fd9e47b1f2bd527076308d3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59705"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59706"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59707"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59708"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59709"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c26eec6a274e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220601115057-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "640fdee1e797",
	                        "no-preload-20220601115057-16804"
	                    ],
	                    "NetworkID": "3a8d7e898b67819d09e7c626e20c10b519689f708220d091d47f03ea6749e9b3",
	                    "EndpointID": "acdb61ec9982dee0525dc6aefaae2ab513e16af32e41f91ff64319535a82f438",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
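Individual fields can be pulled out of that dump with docker's Go-template --format flag instead of scanning the full JSON; the harness itself does this later in these logs to read the container state and the mapped SSH port:

	docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	docker container inspect no-preload-20220601115057-16804 \
		-f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'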
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220601115057-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220601115057-16804 logs -n 25: (2.736653787s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p auto-20220601113004-16804                      | auto-20220601113004-16804               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:45 PDT | 01 Jun 22 11:45 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p auto-20220601113004-16804                      | auto-20220601113004-16804               | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:45 PDT | 01 Jun 22 11:45 PDT |
	| start   | -p false-20220601113005-16804                     | false-20220601113005-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:44 PDT | 01 Jun 22 11:46 PDT |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p false-20220601113005-16804                     | false-20220601113005-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| start   | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:45 PDT | 01 Jun 22 11:46 PDT |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |                |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p false-20220601113005-16804                     | false-20220601113005-16804              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	| delete  | -p bridge-20220601113004-16804                    | bridge-20220601113004-16804             | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:46 PDT |
	| start   | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:47 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	|         | --memory=2048                                     |                                         |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:47 PDT | 01 Jun 22 11:47 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | kubenet-20220601113004-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:48 PDT | 01 Jun 22 11:48 PDT |
	|         | kubenet-20220601113004-16804                      |                                         |         |                |                     |                     |
	| start   | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:46 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |                |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:57 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:53:49
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:53:49.869744   28319 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:53:49.870058   28319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:53:49.870063   28319 out.go:309] Setting ErrFile to fd 2...
	I0601 11:53:49.870067   28319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:53:49.870200   28319 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:53:49.870479   28319 out.go:303] Setting JSON to false
	I0601 11:53:49.885748   28319 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8599,"bootTime":1654101030,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:53:49.885855   28319 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:53:49.907511   28319 out.go:177] * [old-k8s-version-20220601114806-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:53:49.929263   28319 notify.go:193] Checking for updates...
	I0601 11:53:49.950161   28319 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:53:49.972303   28319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:53:49.993555   28319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:53:50.019203   28319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:53:50.040605   28319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:53:50.063270   28319 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:53:50.085267   28319 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 11:53:50.106145   28319 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:53:50.179855   28319 docker.go:137] docker version: linux-20.10.14
	I0601 11:53:50.179965   28319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:53:50.309210   28319 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:53:50.252610494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:53:50.352718   28319 out.go:177] * Using the docker driver based on existing profile
	I0601 11:53:50.373802   28319 start.go:284] selected driver: docker
	I0601 11:53:50.373850   28319 start.go:806] validating driver "docker" against &{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:50.374023   28319 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:53:50.377412   28319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:53:50.505237   28319 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:53:50.450324655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:53:50.505421   28319 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 11:53:50.505439   28319 cni.go:95] Creating CNI manager for ""
	I0601 11:53:50.505447   28319 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:53:50.505454   28319 start_flags.go:306] config:
	{Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:50.527438   28319 out.go:177] * Starting control plane node old-k8s-version-20220601114806-16804 in cluster old-k8s-version-20220601114806-16804
	I0601 11:53:50.548947   28319 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 11:53:50.570241   28319 out.go:177] * Pulling base image ...
	I0601 11:53:50.613151   28319 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:53:50.613177   28319 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 11:53:50.613243   28319 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0601 11:53:50.613268   28319 cache.go:57] Caching tarball of preloaded images
	I0601 11:53:50.613461   28319 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:53:50.613486   28319 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0601 11:53:50.614580   28319 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:53:50.680684   28319 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 11:53:50.680699   28319 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 11:53:50.680708   28319 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:53:50.680756   28319 start.go:352] acquiring machines lock for old-k8s-version-20220601114806-16804: {Name:mke97f71f3781c3324662a5c4576dc1a6ff166e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:53:50.680837   28319 start.go:356] acquired machines lock for "old-k8s-version-20220601114806-16804" in 61.411µs
	I0601 11:53:50.680855   28319 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:53:50.680865   28319 fix.go:55] fixHost starting: 
	I0601 11:53:50.681120   28319 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:53:50.749601   28319 fix.go:103] recreateIfNeeded on old-k8s-version-20220601114806-16804: state=Stopped err=<nil>
	W0601 11:53:50.749634   28319 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:53:50.771624   28319 out.go:177] * Restarting existing docker container for "old-k8s-version-20220601114806-16804" ...
	I0601 11:53:47.937151   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:50.415910   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:50.793636   28319 cli_runner.go:164] Run: docker start old-k8s-version-20220601114806-16804
	I0601 11:53:51.159654   28319 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220601114806-16804 --format={{.State.Status}}
	I0601 11:53:51.244535   28319 kic.go:416] container "old-k8s-version-20220601114806-16804" state is running.
	I0601 11:53:51.245201   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:51.377956   28319 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/config.json ...
	I0601 11:53:51.378362   28319 machine.go:88] provisioning docker machine ...
	I0601 11:53:51.378386   28319 ubuntu.go:169] provisioning hostname "old-k8s-version-20220601114806-16804"
	I0601 11:53:51.378453   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.457140   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:51.457343   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:51.457358   28319 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220601114806-16804 && echo "old-k8s-version-20220601114806-16804" | sudo tee /etc/hostname
	I0601 11:53:51.580646   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220601114806-16804
	
	I0601 11:53:51.580749   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.656628   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:51.656782   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:51.656796   28319 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220601114806-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220601114806-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220601114806-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:53:51.776288   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:53:51.776311   28319 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:53:51.776328   28319 ubuntu.go:177] setting up certificates
	I0601 11:53:51.776340   28319 provision.go:83] configureAuth start
	I0601 11:53:51.776419   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:51.850151   28319 provision.go:138] copyHostCerts
	I0601 11:53:51.850269   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:53:51.850278   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:53:51.850366   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:53:51.850623   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:53:51.850633   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:53:51.850695   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:53:51.850828   28319 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:53:51.850834   28319 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:53:51.850894   28319 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:53:51.851013   28319 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220601114806-16804 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220601114806-16804]
	I0601 11:53:51.901708   28319 provision.go:172] copyRemoteCerts
	I0601 11:53:51.901767   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:53:51.901818   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:51.975877   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:52.060009   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:53:52.077110   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:53:52.093871   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0601 11:53:52.110974   28319 provision.go:86] duration metric: configureAuth took 334.623818ms
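
provision.go:112 issues a server certificate whose SANs cover the node IP, loopback, and the machine names listed above. A hedged sketch of building such a SAN list with crypto/x509; it self-signs for brevity, whereas minikube signs with the cached minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220601114806-16804"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:112 line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20220601114806-16804"},
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert DER bytes:", len(der))
}
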
	I0601 11:53:52.110987   28319 ubuntu.go:193] setting minikube options for container-runtime
	I0601 11:53:52.111171   28319 config.go:178] Loaded profile config "old-k8s-version-20220601114806-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0601 11:53:52.111232   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.184299   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.184438   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.184448   28319 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 11:53:52.302847   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 11:53:52.302863   28319 ubuntu.go:71] root file system type: overlay
	I0601 11:53:52.303018   28319 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 11:53:52.303102   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.376389   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.376552   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.376603   28319 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 11:53:52.502277   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 11:53:52.502373   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.575586   28319 main.go:134] libmachine: Using SSH client type: native
	I0601 11:53:52.575726   28319 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 59947 <nil> <nil>}
	I0601 11:53:52.575739   28319 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 11:53:52.696095   28319 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:53:52.696111   28319 machine.go:91] provisioned docker machine in 1.317750791s
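
The three SSH commands above implement an idempotent unit update: render docker.service to a .new file, diff it against the live one, and only move it into place and restart docker when something changed. The same logic as a local sketch (assuming it runs as root, as the real SSH commands do via sudo):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		return // unit unchanged: skip the needless docker restart
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"-f", "daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			panic(err)
		}
	}
}
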
	I0601 11:53:52.696121   28319 start.go:306] post-start starting for "old-k8s-version-20220601114806-16804" (driver="docker")
	I0601 11:53:52.696125   28319 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:53:52.696189   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:53:52.696241   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.769932   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:52.855461   28319 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:53:52.859028   28319 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 11:53:52.859043   28319 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 11:53:52.859052   28319 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 11:53:52.859056   28319 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 11:53:52.859064   28319 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:53:52.859169   28319 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:53:52.859314   28319 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 11:53:52.859492   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:53:52.866875   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:53:52.884313   28319 start.go:309] post-start completed in 188.184945ms
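
The filesync.go scan mirrors everything under .minikube/files into the guest at the same relative path, which is why files/etc/ssl/certs/168042.pem lands in /etc/ssl/certs above. A sketch of that mapping (the root path here is shortened for illustration):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/Users/jenkins/.minikube/files" // illustrative; the run's real root is longer
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := strings.TrimPrefix(p, root) // e.g. /etc/ssl/certs/168042.pem
		fmt.Printf("local asset: %s -> %s\n", p, dest)
		return nil
	})
}
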
	I0601 11:53:52.884426   28319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:53:52.884507   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:52.959492   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.043087   28319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 11:53:53.047543   28319 fix.go:57] fixHost completed within 2.366693794s
	I0601 11:53:53.047555   28319 start.go:81] releasing machines lock for "old-k8s-version-20220601114806-16804", held for 2.366727273s
	I0601 11:53:53.047629   28319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220601114806-16804
	I0601 11:53:53.121099   28319 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:53:53.121221   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:53.121364   28319 ssh_runner.go:195] Run: systemctl --version
	I0601 11:53:53.121966   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:53.202586   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.205983   28319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59947 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/old-k8s-version-20220601114806-16804/id_rsa Username:docker}
	I0601 11:53:53.287975   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 11:53:53.422168   28319 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:53:53.432821   28319 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 11:53:53.432877   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:53:53.443234   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:53:53.456386   28319 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 11:53:53.525203   28319 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 11:53:53.595305   28319 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 11:53:53.605613   28319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:53:53.677054   28319 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 11:53:53.687222   28319 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:53:53.721998   28319 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 11:53:53.799095   28319 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.16 ...
	I0601 11:53:53.799216   28319 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220601114806-16804 dig +short host.docker.internal
	I0601 11:53:53.940925   28319 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 11:53:53.941045   28319 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 11:53:53.945523   28319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
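
The bash one-liner above is an upsert on /etc/hosts: strip any stale line ending in the name, append the current IP, and write via a temp file. The same pattern in Go (using os.Rename where the log's command uses `sudo cp`):

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	tmp := hostsPath + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // the log's command uses `sudo cp` instead
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"))
}
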
	I0601 11:53:53.955094   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:54.028140   28319 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 11:53:54.028206   28319 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:53:54.058427   28319 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:53:54.058444   28319 docker.go:541] Images already preloaded, skipping extraction
	I0601 11:53:54.058545   28319 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 11:53:54.088697   28319 docker.go:610] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0601 11:53:54.088719   28319 cache_images.go:84] Images are preloaded, skipping loading
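
Both `docker images` listings above feed the preload short-circuit: if the expected image set is already in the container's daemon, the lz4 tarball is not extracted. A sketch of that check (the expected list is copied from the stdout block, trimmed):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
	}
	out, err := exec.Command("docker", "images", "--format",
		"{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		have[sc.Text()] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would extract the preload tarball:", img)
			return
		}
	}
	fmt.Println("Images already preloaded, skipping extraction")
}
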
	I0601 11:53:54.088807   28319 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 11:53:54.166463   28319 cni.go:95] Creating CNI manager for ""
	I0601 11:53:54.166476   28319 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:53:54.166488   28319 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:53:54.166502   28319 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220601114806-16804 NodeName:old-k8s-version-20220601114806-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:53:54.166740   28319 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220601114806-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220601114806-16804
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
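
kubeadm.go:162 renders the config above from the options struct logged at kubeadm.go:158. A minimal illustrative sketch of that rendering with text/template; the fragment below is invented for illustration and is not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// An invented fragment for illustration; not minikube's real template.
const frag = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}{"192.168.49.2", 8443, "old-k8s-version-20220601114806-16804"})
}
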
	
	I0601 11:53:54.166870   28319 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220601114806-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:53:54.166970   28319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0601 11:53:54.175057   28319 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:53:54.175168   28319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:53:54.182581   28319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0601 11:53:54.195344   28319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:53:54.209271   28319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
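
The three "scp memory -->" lines above copy assets that exist only in RAM, not on the host disk. A hedged equivalent using the stock ssh client (minikube uses its own SSH runner; the port and user are taken from the sshutil lines above, and the unit content here is illustrative):

package main

import (
	"bytes"
	"os/exec"
)

func main() {
	unit := []byte("[Unit]\nDescription=kubelet\n") // illustrative content only
	// Stream the in-memory bytes to `sudo tee` on the guest; the destination
	// mirrors the kubelet.service scp target above.
	cmd := exec.Command("ssh", "-p", "59947", "docker@127.0.0.1",
		"sudo tee /lib/systemd/system/kubelet.service >/dev/null")
	cmd.Stdin = bytes.NewReader(unit)
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
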
	I0601 11:53:54.222455   28319 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0601 11:53:54.226242   28319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 11:53:54.235793   28319 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804 for IP: 192.168.49.2
	I0601 11:53:54.236026   28319 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:53:54.236076   28319 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:53:54.236166   28319 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/client.key
	I0601 11:53:54.236237   28319 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key.dd3b5fb2
	I0601 11:53:54.236290   28319 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key
	I0601 11:53:54.236516   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 11:53:54.236567   28319 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 11:53:54.236582   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 11:53:54.236627   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:53:54.236663   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:53:54.236693   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:53:54.236758   28319 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 11:53:54.237319   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:53:54.255312   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0601 11:53:54.273877   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:53:54.292370   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601114806-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 11:53:54.309832   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:53:54.326977   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 11:53:54.344196   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:53:54.362336   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 11:53:54.379964   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 11:53:54.397530   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:53:54.417711   28319 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 11:53:54.437491   28319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 11:53:54.450542   28319 ssh_runner.go:195] Run: openssl version
	I0601 11:53:54.456042   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:53:54.464269   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.468369   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.468417   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:53:54.473721   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:53:54.481064   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 11:53:54.489014   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.493352   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.493405   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 11:53:54.498751   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 11:53:54.506172   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 11:53:54.514267   28319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.518553   28319 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.518598   28319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 11:53:54.523963   28319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
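
The openssl/ln pairs above build OpenSSL's subject-hash lookup links: each CA certificate gets an /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The step for one certificate, sketched:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}
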
	I0601 11:53:54.531759   28319 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220601114806-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220601114806-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:53:54.531914   28319 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:53:54.560485   28319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:53:54.568453   28319 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:53:54.568470   28319 kubeadm.go:626] restartCluster start
	I0601 11:53:54.568526   28319 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:53:54.576181   28319 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:54.576234   28319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220601114806-16804
	I0601 11:53:54.648876   28319 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220601114806-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:53:54.649065   28319 kubeconfig.go:127] "old-k8s-version-20220601114806-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 11:53:54.649419   28319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:53:54.650792   28319 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:53:54.658693   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:54.658754   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:54.667668   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:54.867864   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:54.868016   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:52.915439   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:54.916282   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:53:57.416935   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	W0601 11:53:54.878565   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.067861   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.068061   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.078749   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.267872   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.267970   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.277798   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.467808   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.468001   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.478316   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.668830   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.668990   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.679581   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:55.867820   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:55.867886   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:55.877012   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.067800   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.067905   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.078888   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.268000   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.268155   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.280256   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.469870   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.470054   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.480670   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.668044   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.668248   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.678758   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:56.869784   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:56.870011   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:56.881309   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.068003   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.068108   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.078632   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.268862   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.269009   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.279785   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.467744   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.467859   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.476668   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.669778   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.669940   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.680383   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.680392   28319 api_server.go:165] Checking apiserver status ...
	I0601 11:53:57.680428   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:53:57.688734   28319 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:53:57.688748   28319 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
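
The repeated api_server.go:165/169 pairs above are a fixed-cadence poll for a kube-apiserver process; when the deadline passes with no hit, restartCluster falls through to the reconfigure path. A compressed sketch of that loop (timeout shortened; the real one is much longer):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second) // shortened; the real timeout is much longer
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms spacing in the log
	}
	fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
}
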
	I0601 11:53:57.688756   28319 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:53:57.688806   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 11:53:57.716946   28319 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:53:57.727312   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:53:57.734908   28319 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun  1 18:50 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Jun  1 18:50 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5931 Jun  1 18:50 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jun  1 18:50 /etc/kubernetes/scheduler.conf
	
	I0601 11:53:57.734963   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:53:57.742318   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:53:57.749223   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:53:57.756324   28319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:53:57.763812   28319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:53:57.771443   28319 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:53:57.771471   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:57.824342   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.674608   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.883641   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:53:58.947348   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
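
Rather than a full `kubeadm init`, the restart path above replays individual init phases against the rendered config so existing cluster state is reused. A sketch of that sequence (binary path taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Each phase regenerates only its own assets, reusing what exists.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.16.0/kubeadm", args...)
		fmt.Println("run:", cmd.Args)
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
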
	I0601 11:53:59.001013   28319 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:53:59.001108   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:59.510767   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:53:59.916784   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:01.917518   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:00.009647   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:00.509747   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:01.010150   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:01.509684   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:02.010421   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:02.509629   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:03.010849   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:03.509597   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:04.010617   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:04.509864   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:04.417417   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:06.915839   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:05.009626   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:05.510122   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:06.011243   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:06.509597   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:07.010075   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:07.510735   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:08.009752   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:08.510521   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:09.011821   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:09.509668   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:09.416812   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:11.916325   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:10.009948   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:10.510847   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:11.009616   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:11.511800   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:12.011078   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:12.509781   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:13.010426   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:13.511504   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:14.009773   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:14.511892   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:13.917075   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:16.417004   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:15.009733   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:15.509887   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:16.009785   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:16.509980   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:17.010719   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:17.510131   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:18.010694   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:18.509925   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:19.009913   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:19.509819   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:18.418120   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:20.915536   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:20.010244   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:20.511718   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:21.009981   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:21.511674   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:22.010072   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:22.510782   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:23.010358   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:23.510119   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:24.010784   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:24.510053   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:22.915622   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:25.416214   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:25.010176   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:25.509875   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:26.010334   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:26.509928   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:27.011901   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:27.510111   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:28.010803   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:28.511923   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:29.009812   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:29.510817   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:27.917006   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:29.917023   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:32.412894   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:30.009917   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:30.509902   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:31.009955   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:31.510015   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:32.009897   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:32.511855   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:33.010119   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:33.509814   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:34.009927   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:34.510142   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:34.413724   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:36.916224   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:35.009839   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:35.510508   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:36.011637   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:36.510196   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:37.011880   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:37.510089   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:38.009692   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:38.511810   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:39.011487   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:39.510121   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:39.413982   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:41.915154   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:40.009747   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:40.510936   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:41.009982   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:41.511810   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:42.009813   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:42.509738   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:43.009671   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:43.510070   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:44.010000   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:44.510019   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:43.915607   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:46.415307   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:45.011452   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:45.510016   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:46.011805   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:46.511096   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:47.010260   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:47.511556   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:48.011623   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:48.510043   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:49.010213   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:49.511714   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:48.918361   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:51.415273   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:50.010714   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:50.510086   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:51.010435   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:51.509903   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:52.011713   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:52.511717   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:53.010672   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:53.510554   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:54.011736   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:54.510455   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:53.415382   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:55.915589   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:54:55.009677   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:55.511743   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:56.010494   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:56.510375   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:57.009595   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:57.510546   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:58.009763   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:58.510692   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:54:59.010031   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:54:59.041359   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.041374   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:54:59.041433   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:54:59.070260   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.070272   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:54:59.070335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:54:59.100026   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.100038   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:54:59.100092   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:54:59.130410   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.130422   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:54:59.130489   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:54:59.161102   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.161116   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:54:59.161174   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:54:59.190924   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.190935   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:54:59.190999   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:54:59.220657   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.220668   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:54:59.220727   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:54:59.249159   28319 logs.go:274] 0 containers: []
	W0601 11:54:59.249172   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:54:59.249178   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:54:59.249185   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:54:59.261384   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:54:59.261396   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:54:59.314775   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:54:59.314790   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:54:59.314813   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:54:59.327098   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:54:59.327111   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:54:57.916340   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:00.413721   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:01.380143   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053042018s)
	I0601 11:55:01.380273   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:01.380280   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:03.922143   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:04.010905   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:04.040989   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.041000   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:04.041053   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:04.068936   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.068948   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:04.069005   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:04.097959   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.097971   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:04.098033   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:04.126721   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.126734   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:04.126798   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:04.159225   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.159236   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:04.159294   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:04.190775   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.190816   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:04.190876   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:04.221251   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.221264   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:04.221323   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:04.252908   28319 logs.go:274] 0 containers: []
	W0601 11:55:04.252955   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:04.252962   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:04.252973   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:04.295721   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:04.295735   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:04.307860   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:04.307873   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:04.362481   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:04.362494   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:04.362502   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:04.374612   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:04.374623   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:02.916832   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:05.414833   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:07.416272   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:06.432720   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058108483s)
	I0601 11:55:08.935099   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:09.011533   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:09.042307   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.042320   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:09.042373   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:09.071674   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.071686   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:09.071752   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:09.100500   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.100516   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:09.100572   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:09.129557   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.129568   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:09.129632   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:09.159131   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.159144   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:09.159198   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:09.188211   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.188224   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:09.188282   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:09.218887   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.218900   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:09.218955   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:09.248189   28319 logs.go:274] 0 containers: []
	W0601 11:55:09.248204   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:09.248212   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:09.248220   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:09.292398   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:09.292412   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:09.305043   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:09.305056   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:09.358584   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:09.358623   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:09.358646   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:09.371613   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:09.371625   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:09.914468   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:11.915325   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:11.427594   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055980689s)
	I0601 11:55:13.928572   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:14.011456   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:14.041396   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.041409   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:14.041466   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:14.069221   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.069233   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:14.069300   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:14.098018   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.098031   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:14.098087   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:14.128468   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.128480   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:14.128538   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:14.162047   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.162059   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:14.162114   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:14.195633   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.195647   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:14.195716   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:14.224730   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.224743   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:14.224796   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:14.255413   28319 logs.go:274] 0 containers: []
	W0601 11:55:14.255426   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:14.255449   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:14.255456   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:14.297925   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:14.297938   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:14.311464   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:14.311477   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:14.363749   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:14.363759   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:14.363766   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:14.377049   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:14.377063   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:13.916812   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:16.413640   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:16.431836   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054784141s)
	I0601 11:55:18.932093   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:19.009576   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:19.039961   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.039974   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:19.040032   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:19.069166   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.069178   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:19.069234   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:19.097392   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.097405   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:19.097468   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:19.128648   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.128660   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:19.128716   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:19.158222   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.158235   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:19.158294   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:19.188141   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.188155   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:19.188209   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:19.219575   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.219588   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:19.219654   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:19.253005   28319 logs.go:274] 0 containers: []
	W0601 11:55:19.253019   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:19.253026   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:19.253035   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:19.266133   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:19.266149   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:18.916745   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:21.413196   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:21.320131   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053993397s)
	I0601 11:55:21.320234   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:21.320240   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:21.361727   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:21.361740   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:21.375163   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:21.375177   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:21.432802   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:23.934258   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:24.009921   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:24.040408   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.040420   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:24.040476   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:24.068603   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.068615   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:24.068673   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:24.097572   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.097584   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:24.097641   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:24.127008   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.127020   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:24.127083   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:24.157041   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.157054   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:24.157117   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:24.186748   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.186761   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:24.186819   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:24.215933   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.215946   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:24.216013   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:24.247816   28319 logs.go:274] 0 containers: []
	W0601 11:55:24.247829   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:24.247836   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:24.247843   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:24.260281   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:24.260293   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:23.414008   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:25.913520   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:26.315423   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055142929s)
	I0601 11:55:26.315530   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:26.315537   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:26.354821   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:26.354835   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:26.369903   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:26.369926   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:26.426327   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
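	(The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above are a roughly 500ms health poll: the runner keeps checking whether an apiserver process has appeared before falling back to the container and log checks. A minimal Go sketch of that polling pattern follows; the function name and the 2-minute timeout are assumptions for illustration only, not minikube's actual implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a kube-apiserver process every 500ms,
	// mirroring the pgrep loop visible in the log above. Illustrative only.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}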
	I0601 11:55:28.926931   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:29.009389   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:29.040058   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.040071   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:29.040129   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:29.068341   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.068353   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:29.068410   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:29.098806   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.098817   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:29.098876   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:29.128428   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.128462   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:29.128520   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:29.158686   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.158725   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:29.158785   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:29.188284   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.188295   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:29.188348   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:29.217778   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.217791   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:29.217855   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:29.247459   28319 logs.go:274] 0 containers: []
	W0601 11:55:29.247472   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:29.247479   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:29.247485   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:29.290765   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:29.290780   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:29.302626   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:29.302638   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:29.356128   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:29.356140   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:29.356147   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:29.369506   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:29.369522   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:27.915099   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:30.413362   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:31.427130   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057620396s)
	I0601 11:55:33.928625   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:34.009592   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:34.039227   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.039241   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:34.039301   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:34.068316   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.068329   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:34.068388   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:34.097349   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.097360   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:34.097414   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:34.127402   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.127415   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:34.127473   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:34.158010   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.158023   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:34.158091   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:34.189587   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.189604   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:34.189668   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:34.219589   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.219601   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:34.219659   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:34.251097   28319 logs.go:274] 0 containers: []
	W0601 11:55:34.251111   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:34.251118   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:34.251125   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:34.294366   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:34.294381   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:34.306716   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:34.306749   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:34.365768   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:34.365779   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:34.365789   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:34.378842   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:34.378855   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:32.914832   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:35.414524   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:37.415451   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:36.434298   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055455813s)
	I0601 11:55:38.936699   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:39.009065   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:39.038697   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.038710   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:39.038765   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:39.067921   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.067933   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:39.067992   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:39.098440   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.098452   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:39.098516   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:39.127326   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.127338   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:39.127408   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:39.156250   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.156261   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:39.156319   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:39.185946   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.185958   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:39.186014   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:39.215610   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.215622   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:39.215687   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:39.245933   28319 logs.go:274] 0 containers: []
	W0601 11:55:39.245945   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:39.245952   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:39.245958   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:39.288218   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:39.288232   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:39.300049   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:39.300062   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:39.353082   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:39.353099   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:39.353107   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:39.368530   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:39.368544   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:39.916089   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:42.413720   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:41.423732   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055201327s)
	I0601 11:55:43.924128   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:44.009307   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:44.039683   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.039695   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:44.039751   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:44.067842   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.067855   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:44.067913   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:44.097345   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.097361   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:44.097434   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:44.127436   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.127448   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:44.127503   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:44.156091   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.156109   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:44.156164   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:44.185928   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.185961   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:44.186024   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:44.214767   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.214779   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:44.214838   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:44.245949   28319 logs.go:274] 0 containers: []
	W0601 11:55:44.245962   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:44.245968   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:44.245975   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:44.287811   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:44.287825   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:44.300341   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:44.300374   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:44.358385   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:44.358412   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:44.358420   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:44.371801   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:44.371813   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:44.415399   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:46.913440   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:46.428143   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056342162s)
	I0601 11:55:48.928399   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:49.009439   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:49.041219   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.041231   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:49.041298   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:49.070249   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.070261   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:49.070314   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:49.099733   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.099745   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:49.099810   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:49.129069   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.129087   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:49.129156   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:49.160580   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.160592   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:49.160649   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:49.191907   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.191927   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:49.192017   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:49.224082   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.224094   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:49.224150   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:49.253092   28319 logs.go:274] 0 containers: []
	W0601 11:55:49.253105   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:49.253112   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:49.253119   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:49.296708   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:49.296724   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:49.308993   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:49.309005   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:49.362195   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:49.362213   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:49.362221   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:49.375504   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:49.375515   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:48.916194   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:51.413300   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:51.430612   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055108694s)
	I0601 11:55:53.931474   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:54.010786   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:54.041202   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.041214   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:54.041269   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:54.070844   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.070858   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:54.070913   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:54.100345   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.100358   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:54.100429   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:54.135095   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.135108   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:54.135161   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:54.164057   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.164070   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:54.164163   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:54.194214   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.194226   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:54.194283   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:54.224549   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.224563   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:54.224617   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:54.253713   28319 logs.go:274] 0 containers: []
	W0601 11:55:54.253725   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:54.253732   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:55:54.253741   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:55:54.296231   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:55:54.296245   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:55:54.309155   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:55:54.309170   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:55:54.367180   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:55:54.367192   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:55:54.367202   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:55:54.380905   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:54.380918   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:53.416659   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:55.913796   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:55:56.441742   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060835682s)
	I0601 11:55:58.942261   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:55:59.010922   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:55:59.041572   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.041586   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:55:59.041646   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:55:59.071435   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.071447   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:55:59.071510   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:55:59.102114   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.102126   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:55:59.102180   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:55:59.131205   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.131218   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:55:59.131290   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:55:59.161117   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.161144   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:55:59.161199   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:55:59.192225   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.192237   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:55:59.192291   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:55:59.222459   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.222472   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:55:59.222526   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:55:59.252831   28319 logs.go:274] 0 containers: []
	W0601 11:55:59.252844   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:55:59.252851   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:55:59.252859   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:55:58.415467   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:00.915615   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:01.309035   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056190082s)
	I0601 11:56:01.309146   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:01.309153   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:01.351333   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:01.351348   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:01.363658   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:01.363670   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:01.419248   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:01.419262   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:01.419269   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:03.932268   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:04.010915   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:04.041445   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.041457   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:04.041511   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:04.071011   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.071024   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:04.071085   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:04.104002   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.104013   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:04.104077   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:04.134006   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.134019   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:04.134100   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:04.164966   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.164980   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:04.165051   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:04.195574   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.195585   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:04.195641   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:04.226690   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.226702   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:04.226761   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:04.255356   28319 logs.go:274] 0 containers: []
	W0601 11:56:04.255369   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:04.255376   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:04.255397   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:04.299830   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:04.299845   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:04.311638   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:04.311650   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:04.366259   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:04.366299   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:04.366307   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:04.379569   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:04.379580   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:03.413050   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:05.414664   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:07.414801   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:06.441255   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061688435s)
	I0601 11:56:08.942583   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:09.010888   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:09.041505   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.041516   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:09.041582   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:09.069955   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.069968   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:09.070020   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:09.100291   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.100302   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:09.100355   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:09.128780   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.128791   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:09.128844   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:09.158028   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.158040   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:09.158100   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:09.188003   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.188016   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:09.188071   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:09.217250   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.217263   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:09.217335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:09.247404   28319 logs.go:274] 0 containers: []
	W0601 11:56:09.247416   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:09.247423   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:09.247430   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:09.291646   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:09.291660   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:09.303726   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:09.303737   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:09.359404   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:09.359416   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:09.359423   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:09.372338   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:09.372352   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:09.914692   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:11.914813   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:11.438025   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065685968s)
	I0601 11:56:13.938356   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:14.010482   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:14.042655   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.042666   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:14.042721   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:14.073307   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.073335   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:14.073392   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:14.103025   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.103036   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:14.103091   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:14.132511   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.132524   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:14.132583   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:14.162337   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.162349   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:14.162404   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:14.192882   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.192896   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:14.192952   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:14.222438   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.222451   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:14.222506   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:14.252850   28319 logs.go:274] 0 containers: []
	W0601 11:56:14.252863   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:14.252871   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:14.252878   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:14.265274   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:14.265300   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:14.413751   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:16.913656   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:16.319655   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0543632s)
	I0601 11:56:16.319773   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:16.319781   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:16.360376   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:16.360390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:16.373260   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:16.373293   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:16.428799   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:18.930318   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:19.010706   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:19.041493   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.041505   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:19.041566   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:19.071367   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.071377   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:19.071438   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:19.102204   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.102217   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:19.102273   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:19.134887   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.134899   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:19.134960   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:19.165401   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.165414   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:19.165481   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:19.199809   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.199820   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:19.199917   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:19.231653   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.231665   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:19.231722   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:19.261391   28319 logs.go:274] 0 containers: []
	W0601 11:56:19.261403   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:19.261410   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:19.261416   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:19.304944   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:19.304958   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:19.316813   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:19.316825   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:19.372616   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:19.372627   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:19.372633   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:19.385307   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:19.385318   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:18.913944   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:20.915285   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:21.446084   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060778195s)
	I0601 11:56:23.946972   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:24.009400   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:24.039656   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.039669   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:24.039728   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:24.070582   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.070594   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:24.070651   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:24.100855   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.100867   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:24.100920   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:24.131557   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.131567   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:24.131627   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:24.161584   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.161596   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:24.161652   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:24.191550   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.191562   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:24.191632   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:24.223779   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.223792   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:24.223849   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:24.254796   28319 logs.go:274] 0 containers: []
	W0601 11:56:24.254809   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:24.254816   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:24.254823   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:24.299122   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:24.299137   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:24.311260   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:24.311276   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:24.366958   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:24.366989   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:24.366995   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:24.380157   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:24.380171   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:23.412235   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:25.414566   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:26.434527   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054347865s)
	I0601 11:56:28.934821   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:29.010609   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:29.042687   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.042700   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:29.042757   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:29.071650   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.071663   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:29.071720   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:29.100444   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.100456   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:29.100516   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:29.130300   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.130313   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:29.130370   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:29.160069   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.160081   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:29.160136   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:29.189354   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.189366   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:29.189420   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:29.218871   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.218883   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:29.218938   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:29.249986   28319 logs.go:274] 0 containers: []
	W0601 11:56:29.249998   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:29.250005   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:29.250011   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:29.289956   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:29.289969   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:29.301893   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:29.301922   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:29.354235   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:29.354260   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:29.354288   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:29.367183   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:29.367196   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:27.915544   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:29.916311   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:32.413455   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:31.425251   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058068657s)
	I0601 11:56:33.925564   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:34.008864   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:34.040390   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.040403   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:34.040457   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:34.070772   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.070785   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:34.070845   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:34.100100   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.100115   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:34.100189   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:34.131817   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.131832   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:34.131891   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:34.165170   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.165182   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:34.165240   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:34.196333   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.196346   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:34.196401   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:34.227456   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.227468   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:34.227522   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:34.255880   28319 logs.go:274] 0 containers: []
	W0601 11:56:34.255896   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:34.255905   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:34.255911   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:34.415552   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:36.913902   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:36.313109   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057210284s)
	I0601 11:56:36.313220   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:36.313228   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:36.355277   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:36.355295   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:36.367936   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:36.367949   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:36.427265   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:36.427277   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:36.427284   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:38.944432   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:39.010467   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:39.042318   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.042330   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:39.042389   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:39.071800   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.071811   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:39.071865   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:39.102235   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.102247   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:39.102304   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:39.133642   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.133655   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:39.133711   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:39.162183   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.162215   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:39.162274   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:39.192299   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.192332   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:39.192402   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:39.224060   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.224073   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:39.224128   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:39.254137   28319 logs.go:274] 0 containers: []
	W0601 11:56:39.254151   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:39.254157   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:39.254164   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:39.296037   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:39.296050   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:39.307439   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:39.307450   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:39.365141   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:39.365151   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:39.365165   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:39.378713   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:39.378727   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:38.915179   28155 pod_ready.go:102] pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace has status "Ready":"False"
	I0601 11:56:40.909076   28155 pod_ready.go:81] duration metric: took 4m0.006566969s waiting for pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace to be "Ready" ...
	E0601 11:56:40.909095   28155 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-jr5fk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 11:56:40.909107   28155 pod_ready.go:38] duration metric: took 4m13.097463038s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:56:40.909179   28155 kubeadm.go:630] restartCluster took 4m23.066207931s
	W0601 11:56:40.909259   28155 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 11:56:40.909275   28155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:56:41.442670   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063954081s)
	I0601 11:56:43.943321   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:44.009335   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:44.039950   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.039961   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:44.040015   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:44.069074   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.069087   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:44.069170   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:44.098171   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.098184   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:44.098242   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:44.127158   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.127170   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:44.127231   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:44.158530   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.158543   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:44.158600   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:44.187857   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.187869   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:44.187927   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:44.217215   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.217228   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:44.217282   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:44.251676   28319 logs.go:274] 0 containers: []
	W0601 11:56:44.251689   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:44.251697   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:44.251703   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:44.296360   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:44.296377   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:44.308411   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:44.308422   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:44.363146   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:44.363158   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:44.363165   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:44.375992   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:44.376005   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:46.429887   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053894829s)
	I0601 11:56:48.930355   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:49.010017   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:49.040810   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.040823   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:49.040878   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:49.069024   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.069037   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:49.069090   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:49.100505   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.100519   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:49.100582   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:49.133348   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.133361   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:49.133416   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:49.162816   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.162828   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:49.162886   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:49.194148   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.194160   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:49.194216   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:49.223792   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.223804   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:49.223861   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:49.254312   28319 logs.go:274] 0 containers: []
	W0601 11:56:49.254325   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:49.254332   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:49.254339   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:49.297715   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:49.297732   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:49.309499   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:49.309514   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:49.361498   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:49.361512   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:49.361519   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:49.374038   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:49.374050   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:51.428011   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053974706s)
	I0601 11:56:53.928463   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:54.010281   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:54.041861   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.041873   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:54.041925   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:54.070132   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.070144   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:54.070203   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:54.100461   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.100473   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:54.100529   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:54.129880   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.129891   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:54.129953   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:54.158973   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.158987   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:54.159041   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:54.189002   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.189013   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:54.189069   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:54.219965   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.219978   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:54.220032   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:54.250636   28319 logs.go:274] 0 containers: []
	W0601 11:56:54.250647   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:54.250655   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:54.250664   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:54.294346   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:54.294360   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:54.306971   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:54.306984   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:54.362857   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:56:54.362870   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:54.362878   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:54.376322   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:54.376337   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:56:56.432087   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055762931s)
	I0601 11:56:58.933231   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:56:59.010295   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:56:59.041877   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.041889   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:56:59.041943   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:56:59.070763   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.070781   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:56:59.070837   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:56:59.100715   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.100727   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:56:59.100786   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:56:59.130622   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.130634   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:56:59.130689   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:56:59.161860   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.161873   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:56:59.161927   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:56:59.190790   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.190804   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:56:59.190859   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:56:59.219375   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.219387   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:56:59.219442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:56:59.249583   28319 logs.go:274] 0 containers: []
	W0601 11:56:59.249596   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:56:59.249604   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:56:59.249611   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:56:59.291437   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:56:59.291452   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:56:59.303657   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:56:59.303668   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:56:59.357073   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
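The describe-nodes step fails identically on every pass: the kubeconfig points kubectl at localhost:8443, and since no kube-apiserver container ever starts, nothing is listening on that port. A hypothetical one-liner (not part of the log) to confirm that from a shell on the node:

    # Hypothetical check: is anything listening on the apiserver port?
    sudo ss -tlnp | grep ':8443' || echo 'no listener on 8443'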
	I0601 11:56:59.357084   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:56:59.357091   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:56:59.369377   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:56:59.369390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:01.425646   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056269873s)
	I0601 11:57:03.925844   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:04.010201   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:04.041233   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.041245   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:04.041322   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:04.070072   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.070086   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:04.070153   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:04.100335   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.100354   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:04.100437   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:04.130281   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.130293   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:04.130352   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:04.167795   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.167807   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:04.167928   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:04.197871   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.197884   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:04.197940   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:04.228277   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.228288   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:04.228345   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:04.258092   28319 logs.go:274] 0 containers: []
	W0601 11:57:04.258104   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:04.258111   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:04.258118   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:04.311843   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:04.311868   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:04.311874   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:04.324627   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:04.324640   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:06.380068   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055441139s)
	I0601 11:57:06.380181   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:06.380188   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:06.423000   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:06.423017   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:08.935789   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:09.008127   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:09.038879   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.038891   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:09.038947   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:09.068291   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.068306   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:09.068360   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:09.096958   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.096969   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:09.097039   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:09.126729   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.126741   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:09.126798   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:09.156004   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.156015   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:09.156095   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:09.184629   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.184642   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:09.184699   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:09.214073   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.214085   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:09.214146   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:09.243550   28319 logs.go:274] 0 containers: []
	W0601 11:57:09.243562   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:09.243569   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:09.243576   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:09.286219   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:09.286233   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:09.298176   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:09.298188   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:09.352783   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:09.352796   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:09.352805   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:09.366089   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:09.366102   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:11.424220   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05813202s)
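The timestamps show this whole sequence repeating on a roughly five-second cadence, gated on a pgrep for a live apiserver process. The real loop lives in minikube's Go code; this is only a sketch of its assumed shape, with the pgrep flags verbatim (-x exact match, -n newest, -f match the full command line):

    # Assumed shape of the retry loop seen in the timestamps above.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 5   # then re-run the container probes and log gathering
    done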
	I0601 11:57:13.925524   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:14.010071   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:14.041352   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.041365   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:14.041423   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:14.071470   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.071482   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:14.071539   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:14.100965   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.100977   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:14.101111   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:14.129799   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.129810   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:14.129863   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:14.159841   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.159852   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:14.159908   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:14.190255   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.190270   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:14.190341   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:14.219539   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.219552   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:14.219607   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:14.247896   28319 logs.go:274] 0 containers: []
	W0601 11:57:14.247930   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:14.247937   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:14.247945   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:14.291044   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:14.291058   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:14.304512   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:14.304523   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:14.356717   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:14.356731   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:14.356738   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:14.368729   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:14.368740   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:19.339269   28155 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.430441915s)
	I0601 11:57:19.339331   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:19.351465   28155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:57:19.359858   28155 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:57:19.359933   28155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:57:19.369371   28155 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:57:19.369402   28155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
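Meanwhile the parallel no-preload run (pid 28155) finishes its 38-second kubeadm reset, finds none of the /etc/kubernetes/*.conf files (the ls exiting with status 2 is why stale-config cleanup is reported as "config check failed" and skipped), and proceeds straight to kubeadm init. The init command above, reflowed for readability with all flags verbatim; SystemVerification is in the ignore list because of the docker driver, per the kubeadm.go line above:

    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables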
	I0601 11:57:16.428777   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060048525s)
	I0601 11:57:18.929035   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:19.008006   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:19.040365   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.040380   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:19.040440   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:19.073546   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.073561   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:19.073626   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:19.108192   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.108212   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:19.108276   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:19.142430   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.142443   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:19.142538   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:19.175636   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.175650   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:19.175719   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:19.208195   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.208209   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:19.208267   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:19.240564   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.240576   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:19.240633   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:19.273419   28319 logs.go:274] 0 containers: []
	W0601 11:57:19.273432   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:19.273439   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:19.273446   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:19.331449   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:19.331463   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:19.331471   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:19.346208   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:19.346222   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:19.860439   28155 out.go:204]   - Generating certificates and keys ...
	I0601 11:57:20.661769   28155 out.go:204]   - Booting up control plane ...
	I0601 11:57:21.407126   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060916068s)
	I0601 11:57:21.407235   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:21.407242   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:21.450235   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:21.450250   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:23.962515   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:24.007999   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:24.046910   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.046922   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:24.046977   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:24.078502   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.078515   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:24.078608   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:24.111688   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.111701   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:24.111764   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:24.143708   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.143721   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:24.143783   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:24.175299   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.175313   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:24.175387   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:24.210853   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.210866   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:24.210936   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:24.245012   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.245026   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:24.245095   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:24.281872   28319 logs.go:274] 0 containers: []
	W0601 11:57:24.281885   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:24.281892   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:24.281899   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:24.299283   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:24.299300   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:26.356685   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057383504s)
	I0601 11:57:26.356862   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:26.356871   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:26.401842   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:26.401859   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:26.414869   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:26.414883   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:26.467468   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:28.967580   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:29.008160   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:29.040269   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.040281   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:29.040356   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:29.072206   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.072220   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:29.072281   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:29.105279   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.105291   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:29.105349   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:29.134791   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.134804   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:29.134860   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:29.164913   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.164925   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:29.164979   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:29.194121   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.194134   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:29.194190   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:29.224082   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.224094   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:29.224148   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:29.254968   28319 logs.go:274] 0 containers: []
	W0601 11:57:29.255008   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:29.255015   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:29.255022   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:29.267556   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:29.267568   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:27.711091   28155 out.go:204]   - Configuring RBAC rules ...
	I0601 11:57:28.165356   28155 cni.go:95] Creating CNI manager for ""
	I0601 11:57:28.165369   28155 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 11:57:28.165393   28155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:57:28.165467   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:28.165513   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=no-preload-20220601115057-16804 minikube.k8s.io/updated_at=2022_06_01T11_57_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:28.346909   28155 ops.go:34] apiserver oom_adj: -16
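With the control plane answering, minikube grants cluster-admin to kube-system's default service account (the minikube-rbac clusterrolebinding above), stamps the node with identifying labels, and reads the apiserver's oom_adj straight from procfs (-16, i.e. shielded from the OOM killer). The label command above, reflowed with its arguments verbatim:

    sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes \
      minikube.k8s.io/version=v1.26.0-beta.1 \
      minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 \
      minikube.k8s.io/name=no-preload-20220601115057-16804 \
      minikube.k8s.io/updated_at=2022_06_01T11_57_28_0700 \
      minikube.k8s.io/primary=true \
      --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig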
	I0601 11:57:28.346923   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:28.908441   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:29.408460   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:29.909413   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:30.407861   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:30.907854   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:31.408383   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:31.907991   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:32.409782   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:31.323029   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055474965s)
	I0601 11:57:31.323132   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:31.323140   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:31.365311   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:31.365325   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:31.377327   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:31.377341   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:31.435595   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:33.936600   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:34.008912   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:34.040625   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.040639   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:34.040694   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:34.072501   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.072513   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:34.072569   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:34.104579   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.104591   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:34.104653   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:34.135775   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.135787   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:34.135845   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:34.166312   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.166323   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:34.166381   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:34.195560   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.195572   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:34.195627   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:34.224692   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.224703   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:34.224765   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:34.255698   28319 logs.go:274] 0 containers: []
	W0601 11:57:34.255710   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:34.255717   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:34.255727   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:34.300652   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:34.300667   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:34.313320   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:34.313334   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:34.368671   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:34.368683   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:34.368690   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:34.381336   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:34.381349   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:32.909883   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:33.407682   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:33.907836   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:34.408286   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:34.907960   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:35.408597   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:35.908193   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:36.407746   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:36.909619   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:37.407908   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:36.441359   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060024322s)
	I0601 11:57:38.943165   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:39.007618   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:39.046794   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.046808   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:39.046868   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:39.079598   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.079612   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:39.079683   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:39.109592   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.109604   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:39.109661   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:39.140083   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.140095   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:39.140151   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:39.170917   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.170929   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:39.170987   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:39.200633   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.200644   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:39.200698   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:39.232233   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.232274   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:39.232332   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:39.262769   28319 logs.go:274] 0 containers: []
	W0601 11:57:39.262781   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:39.262788   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:39.262794   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:37.908968   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:38.407947   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:38.907834   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:39.408342   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:39.909689   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:40.407680   28155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 11:57:40.459929   28155 kubeadm.go:1045] duration metric: took 12.294665413s to wait for elevateKubeSystemPrivileges.
	I0601 11:57:40.459949   28155 kubeadm.go:397] StartCluster complete in 5m22.663179926s
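The long run of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: the RBAC grant is only meaningful once the apiserver can serve the default ServiceAccount, so minikube polls for it about twice a second (12.29 s here). A minimal sketch of that wait, with the command verbatim and the loop shape inferred:

    # Inferred loop; the polled command is verbatim from the log above.
    until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done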
	I0601 11:57:40.459970   28155 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:57:40.460046   28155 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:57:40.460585   28155 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:57:40.978234   28155 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220601115057-16804" rescaled to 1
	I0601 11:57:40.978283   28155 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 11:57:40.978297   28155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:57:40.978323   28155 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 11:57:40.999829   28155 out.go:177] * Verifying Kubernetes components...
	I0601 11:57:40.978489   28155 config.go:178] Loaded profile config "no-preload-20220601115057-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:57:40.999900   28155 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:40.999902   28155 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:40.999902   28155 addons.go:65] Setting metrics-server=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:40.999907   28155 addons.go:65] Setting dashboard=true in profile "no-preload-20220601115057-16804"
	I0601 11:57:41.040649   28155 addons.go:153] Setting addon dashboard=true in "no-preload-20220601115057-16804"
	I0601 11:57:41.040654   28155 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220601115057-16804"
	W0601 11:57:41.040664   28155 addons.go:165] addon dashboard should already be in state true
	I0601 11:57:41.040701   28155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220601115057-16804"
	I0601 11:57:41.040717   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 11:57:41.040689   28155 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:57:41.040654   28155 addons.go:153] Setting addon metrics-server=true in "no-preload-20220601115057-16804"
	W0601 11:57:41.040775   28155 addons.go:165] addon metrics-server should already be in state true
	I0601 11:57:41.040778   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.040748   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.040811   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.041053   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.041121   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.041154   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.041181   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.098903   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.098931   28155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 11:57:41.172010   28155 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220601115057-16804"
	I0601 11:57:41.211589   28155 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0601 11:57:41.211668   28155 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:57:41.189830   28155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:57:41.211763   28155 host.go:66] Checking if "no-preload-20220601115057-16804" exists ...
	I0601 11:57:41.235116   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 11:57:41.271485   28155 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:57:41.272074   28155 cli_runner.go:164] Run: docker container inspect no-preload-20220601115057-16804 --format={{.State.Status}}
	I0601 11:57:41.292117   28155 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 11:57:41.292144   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 11:57:41.292159   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:57:41.313668   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.313669   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.387138   28155 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 11:57:41.347801   28155 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220601115057-16804" to be "Ready" ...
	I0601 11:57:41.446511   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 11:57:41.446542   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 11:57:41.447209   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.454733   28155 node_ready.go:49] node "no-preload-20220601115057-16804" has status "Ready":"True"
	I0601 11:57:41.454795   28155 node_ready.go:38] duration metric: took 9.276451ms waiting for node "no-preload-20220601115057-16804" to be "Ready" ...
	I0601 11:57:41.454812   28155 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:57:41.465521   28155 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-g4gfh" in "kube-system" namespace to be "Ready" ...
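node_ready sees the node report Ready within 9.3 ms, and the wait then moves on to the system-critical pods, beginning with coredns-64897985d-g4gfh. Hypothetical kubectl equivalents of those two checks (not commands from the log):

    # Hypothetical: read the Ready condition the waits above are watching.
    kubectl get node no-preload-20220601115057-16804 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl -n kube-system get pod coredns-64897985d-g4gfh \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'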
	I0601 11:57:41.472437   28155 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:57:41.472476   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:57:41.472621   28155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220601115057-16804
	I0601 11:57:41.497682   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.502048   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.577692   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.582026   28155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59705 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/no-preload-20220601115057-16804/id_rsa Username:docker}
	I0601 11:57:41.655324   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 11:57:41.655343   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 11:57:41.736951   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 11:57:41.736970   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 11:57:41.745184   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:57:41.829738   28155 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:57:41.829758   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 11:57:41.836663   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:57:41.932059   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 11:57:41.933127   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 11:57:41.933144   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 11:57:41.963220   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 11:57:41.963233   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 11:57:42.145334   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 11:57:42.145351   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 11:57:42.255177   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 11:57:42.255192   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 11:57:42.436801   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 11:57:42.436821   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 11:57:42.530525   28155 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.431587535s)
	I0601 11:57:42.541721   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 11:57:42.566164   28155 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
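The 1.43 s replace pipeline above injects a host record for host.minikube.internal into the CoreDNS ConfigMap. Reconstructed from the sed expression, the Corefile gains this block immediately before its forward stanza:

    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }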
	I0601 11:57:42.566174   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 11:57:42.649431   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 11:57:42.649452   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 11:57:42.744750   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 11:57:42.744773   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 11:57:42.844258   28155 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 11:57:42.844276   28155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 11:57:42.942530   28155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
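Each addon manifest is streamed from memory over SSH into /etc/kubernetes/addons (the "scp memory --> ... (N bytes)" lines) and then applied in a single kubectl invocation per addon, as in the dashboard command above. A hedged sketch of one such transfer; minikube drives its own SSH client internally, so this is only an approximation using the IP and port from the sshutil lines:

    # Approximation of one "scp memory" staging step (not minikube's actual client).
    ssh -p 59705 docker@127.0.0.1 \
        'sudo tee /etc/kubernetes/addons/dashboard-ns.yaml >/dev/null' \
        < dashboard-ns.yaml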
	I0601 11:57:43.031299   28155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0992243s)
	I0601 11:57:43.031327   28155 addons.go:386] Verifying addon metrics-server=true in "no-preload-20220601115057-16804"
	I0601 11:57:43.486892   28155 pod_ready.go:102] pod "coredns-64897985d-g4gfh" in "kube-system" namespace has status "Ready":"False"
	I0601 11:57:43.956967   28155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.014417476s)
	I0601 11:57:43.981671   28155 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 11:57:41.329410   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.066626686s)
	I0601 11:57:41.329597   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:41.329608   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:41.383544   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:41.383564   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:41.408721   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:41.408743   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:41.509315   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
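Note: this "connection to the server localhost:8443 was refused" stanza repeats for every "describe nodes" attempt in this section because process 28319 is gathering logs while its apiserver is down; the failure is at the TCP level, not authentication. A sketch (illustrative only) for telling a dead listener apart from a kubeconfig problem:

    # Sketch: probe the apiserver port directly before blaming kubectl.
    if curl -sk --max-time 2 https://localhost:8443/healthz >/dev/null; then
      echo "apiserver is answering on 8443"
    else
      echo "nothing listening on 8443; control plane still down"
    fi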
	I0601 11:57:41.509346   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:41.509369   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:44.030515   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:44.507644   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:44.537454   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.537481   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:44.537554   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:44.568183   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.568197   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:44.568261   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:44.599536   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.599547   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:44.599606   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:44.630140   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.630154   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:44.630217   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:44.660777   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.660790   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:44.660846   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:44.691042   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.691055   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:44.691143   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:44.720629   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.720641   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:44.720699   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:44.750426   28319 logs.go:274] 0 containers: []
	W0601 11:57:44.750438   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:44.750445   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:44.750452   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:44.765309   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:44.765324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
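Note: the block above is minikube's per-component probe: each control-plane piece is looked up as a Docker container carrying the k8s_<component> name prefix that dockershim assigns, and an empty result produces the "No container was found" warning. The same loop, condensed into a sketch:

    # Sketch of the container probe logged above (docker ps -a also
    # matches exited containers, so "0 containers" means never created).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && echo "no container matching ${c}" || echo "${c}: ${ids}"
    done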
	I0601 11:57:44.022335   28155 addons.go:417] enableAddons completed in 3.044053036s
	I0601 11:57:45.987315   28155 pod_ready.go:102] pod "coredns-64897985d-g4gfh" in "kube-system" namespace has status "Ready":"False"
	I0601 11:57:46.984563   28155 pod_ready.go:92] pod "coredns-64897985d-g4gfh" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:46.984578   28155 pod_ready.go:81] duration metric: took 5.51910266s waiting for pod "coredns-64897985d-g4gfh" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.984584   28155 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-t97fz" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.990513   28155 pod_ready.go:92] pod "coredns-64897985d-t97fz" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:46.990522   28155 pod_ready.go:81] duration metric: took 5.933055ms waiting for pod "coredns-64897985d-t97fz" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.990528   28155 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.995569   28155 pod_ready.go:92] pod "etcd-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:46.995578   28155 pod_ready.go:81] duration metric: took 5.045027ms waiting for pod "etcd-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:46.995584   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.000562   28155 pod_ready.go:92] pod "kube-apiserver-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.000571   28155 pod_ready.go:81] duration metric: took 4.982774ms waiting for pod "kube-apiserver-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.000578   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.005154   28155 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.005165   28155 pod_ready.go:81] duration metric: took 4.580517ms waiting for pod "kube-controller-manager-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.005172   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77tsv" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.383579   28155 pod_ready.go:92] pod "kube-proxy-77tsv" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.383590   28155 pod_ready.go:81] duration metric: took 378.417828ms waiting for pod "kube-proxy-77tsv" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.383597   28155 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.782529   28155 pod_ready.go:92] pod "kube-scheduler-no-preload-20220601115057-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 11:57:47.782539   28155 pod_ready.go:81] duration metric: took 398.94254ms waiting for pod "kube-scheduler-no-preload-20220601115057-16804" in "kube-system" namespace to be "Ready" ...
	I0601 11:57:47.782547   28155 pod_ready.go:38] duration metric: took 6.327785655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:57:47.782563   28155 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:57:47.782617   28155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:47.795517   28155 api_server.go:71] duration metric: took 6.817292849s to wait for apiserver process to appear ...
	I0601 11:57:47.795534   28155 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:57:47.795543   28155 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59709/healthz ...
	I0601 11:57:47.801557   28155 api_server.go:266] https://127.0.0.1:59709/healthz returned 200:
	ok
	I0601 11:57:47.802639   28155 api_server.go:140] control plane version: v1.23.6
	I0601 11:57:47.802647   28155 api_server.go:130] duration metric: took 7.108692ms to wait for apiserver health ...
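Note: once all watched pods report Ready, minikube confirms the apiserver process with pgrep and then polls the host-side forwarded healthz endpoint (127.0.0.1:59709 in this run) until it returns 200/ok. A sketch of that final health gate:

    # Sketch: poll the host-side forward of the apiserver until healthy.
    # Port 59709 is specific to this run.
    until [ "$(curl -sk https://127.0.0.1:59709/healthz)" = "ok" ]; do
      sleep 1
    done
    echo "apiserver reports ok"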
	I0601 11:57:47.802654   28155 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:57:47.987584   28155 system_pods.go:59] 9 kube-system pods found
	I0601 11:57:47.987599   28155 system_pods.go:61] "coredns-64897985d-g4gfh" [5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9] Running
	I0601 11:57:47.987604   28155 system_pods.go:61] "coredns-64897985d-t97fz" [e084f502-bc7c-4ba5-9f07-990582d89dcd] Running
	I0601 11:57:47.987607   28155 system_pods.go:61] "etcd-no-preload-20220601115057-16804" [07565dba-74b1-4ce7-84b5-6dc3870c5f14] Running
	I0601 11:57:47.987611   28155 system_pods.go:61] "kube-apiserver-no-preload-20220601115057-16804" [6877c44e-2636-4e51-9471-f303d0d0bd86] Running
	I0601 11:57:47.987615   28155 system_pods.go:61] "kube-controller-manager-no-preload-20220601115057-16804" [9a06a3f1-e0cd-412f-96b3-7d4e551347e4] Running
	I0601 11:57:47.987618   28155 system_pods.go:61] "kube-proxy-77tsv" [9fb29050-1356-4744-bd4d-456dbacdf15c] Running
	I0601 11:57:47.987622   28155 system_pods.go:61] "kube-scheduler-no-preload-20220601115057-16804" [40175cd4-f440-44d8-b296-c7283261a1e4] Running
	I0601 11:57:47.987626   28155 system_pods.go:61] "metrics-server-b955d9d8-kz2wj" [85328a99-1f1c-4ee1-b140-b8b04cc702da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 11:57:47.987630   28155 system_pods.go:61] "storage-provisioner" [60233eed-e0b4-4f81-bd4e-ec53371ffc27] Running
	I0601 11:57:47.987634   28155 system_pods.go:74] duration metric: took 184.979327ms to wait for pod list to return data ...
	I0601 11:57:47.987639   28155 default_sa.go:34] waiting for default service account to be created ...
	I0601 11:57:48.221245   28155 default_sa.go:45] found service account: "default"
	I0601 11:57:48.221266   28155 default_sa.go:55] duration metric: took 233.624988ms for default service account to be created ...
	I0601 11:57:48.221277   28155 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 11:57:48.386943   28155 system_pods.go:86] 8 kube-system pods found
	I0601 11:57:48.386964   28155 system_pods.go:89] "coredns-64897985d-g4gfh" [5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9] Running
	I0601 11:57:48.386974   28155 system_pods.go:89] "etcd-no-preload-20220601115057-16804" [07565dba-74b1-4ce7-84b5-6dc3870c5f14] Running
	I0601 11:57:48.386984   28155 system_pods.go:89] "kube-apiserver-no-preload-20220601115057-16804" [6877c44e-2636-4e51-9471-f303d0d0bd86] Running
	I0601 11:57:48.386995   28155 system_pods.go:89] "kube-controller-manager-no-preload-20220601115057-16804" [9a06a3f1-e0cd-412f-96b3-7d4e551347e4] Running
	I0601 11:57:48.387005   28155 system_pods.go:89] "kube-proxy-77tsv" [9fb29050-1356-4744-bd4d-456dbacdf15c] Running
	I0601 11:57:48.387025   28155 system_pods.go:89] "kube-scheduler-no-preload-20220601115057-16804" [40175cd4-f440-44d8-b296-c7283261a1e4] Running
	I0601 11:57:48.387032   28155 system_pods.go:89] "metrics-server-b955d9d8-kz2wj" [85328a99-1f1c-4ee1-b140-b8b04cc702da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 11:57:48.387038   28155 system_pods.go:89] "storage-provisioner" [60233eed-e0b4-4f81-bd4e-ec53371ffc27] Running
	I0601 11:57:48.387045   28155 system_pods.go:126] duration metric: took 165.76319ms to wait for k8s-apps to be running ...
	I0601 11:57:48.387051   28155 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 11:57:48.387105   28155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:48.423774   28155 system_svc.go:56] duration metric: took 36.719143ms WaitForService to wait for kubelet.
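Note: the kubelet check above relies purely on the exit status of systemctl is-active; with --quiet it prints nothing and exits 0 only when the unit is active. A sketch:

    # Sketch: the same liveness test, driven by exit code alone.
    if sudo systemctl is-active --quiet kubelet; then
      echo "kubelet running"
    else
      echo "kubelet not active"
    fi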
	I0601 11:57:48.423792   28155 kubeadm.go:572] duration metric: took 7.445578877s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 11:57:48.423811   28155 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:57:48.582300   28155 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 11:57:48.582312   28155 node_conditions.go:123] node cpu capacity is 6
	I0601 11:57:48.582320   28155 node_conditions.go:105] duration metric: took 158.507046ms to run NodePressure ...
	I0601 11:57:48.582327   28155 start.go:213] waiting for startup goroutines ...
	I0601 11:57:48.614642   28155 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 11:57:48.661447   28155 out.go:177] * Done! kubectl is now configured to use "no-preload-20220601115057-16804" cluster and "default" namespace by default
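Note: the closing lines record a client/cluster minor-version skew of 1 (kubectl 1.24.0 against Kubernetes 1.23.6), which is inside kubectl's supported +/-1 window, so it is reported informationally rather than as a warning. A sketch for surfacing the same pair (--short matches kubectl clients of this era; newer clients print the compact form by default):

    # Sketch: show client and server versions to eyeball the skew.
    kubectl version --short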
	I0601 11:57:46.833468   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06815266s)
	I0601 11:57:46.833611   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:46.833623   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:46.894511   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:46.894539   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:46.907075   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:46.907090   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:46.971671   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:49.472151   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:49.507935   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:49.537886   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.537898   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:49.537960   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:49.568803   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.568816   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:49.568872   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:49.598891   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.598903   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:49.598962   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:49.628803   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.628815   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:49.628874   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:49.660107   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.660118   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:49.660209   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:49.691421   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.691437   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:49.691507   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:49.722844   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.722857   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:49.722911   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:49.755171   28319 logs.go:274] 0 containers: []
	W0601 11:57:49.755183   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:49.755191   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:49.755211   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:49.768071   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:49.768082   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:51.830872   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06280221s)
	I0601 11:57:51.830991   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:51.830999   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:51.895350   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:51.895372   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:51.910561   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:51.910601   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:51.975211   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:54.475645   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:54.507404   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 11:57:54.546927   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.546940   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 11:57:54.547000   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 11:57:54.579713   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.579728   28319 logs.go:276] No container was found matching "etcd"
	I0601 11:57:54.579797   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 11:57:54.614843   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.614860   28319 logs.go:276] No container was found matching "coredns"
	I0601 11:57:54.614948   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 11:57:54.651551   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.651565   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 11:57:54.651624   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 11:57:54.687625   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.687640   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 11:57:54.687712   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 11:57:54.723794   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.723808   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 11:57:54.723872   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 11:57:54.759036   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.759050   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 11:57:54.759111   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 11:57:54.791361   28319 logs.go:274] 0 containers: []
	W0601 11:57:54.791375   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 11:57:54.791382   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 11:57:54.791390   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 11:57:54.839700   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 11:57:54.839716   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 11:57:54.854532   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 11:57:54.854547   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 11:57:54.915142   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 11:57:54.915157   28319 logs.go:123] Gathering logs for Docker ...
	I0601 11:57:54.915164   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 11:57:54.928393   28319 logs.go:123] Gathering logs for container status ...
	I0601 11:57:54.928405   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 11:57:56.983268   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054875531s)
	I0601 11:57:59.485573   28319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:57:59.496486   28319 kubeadm.go:630] restartCluster took 4m4.930290056s
	W0601 11:57:59.496562   28319 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0601 11:57:59.496576   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 11:57:59.913633   28319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:57:59.923079   28319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:57:59.931076   28319 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 11:57:59.931127   28319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:57:59.939179   28319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 11:57:59.939204   28319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 11:58:00.683895   28319 out.go:204]   - Generating certificates and keys ...
	I0601 11:58:01.523528   28319 out.go:204]   - Booting up control plane ...
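Note: restartCluster gave up after 4m4s because the apiserver process never appeared, so minikube falls back to wiping the control plane with kubeadm reset and re-running kubeadm init; the stale-config check just above failed only because the /etc/kubernetes/*.conf files were already gone. A condensed sketch of that fallback (v1.16.0 binary path, dockershim socket, and config path are from this run; the preflight ignore list is abbreviated here, the full list is in the init line above):

    # Sketch of the reset-and-reinit recovery path logged above.
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm reset --cri-socket /var/run/dockershim.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-10250,Swap,SystemVerification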
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:52:14 UTC, end at Wed 2022-06-01 18:58:48 UTC. --
	Jun 01 18:57:07 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:07.873040933Z" level=info msg="ignoring event" container=72a124d3de26891a46029aef7ff25a8fc05ae016371085db9e7bfa75fcf2761a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:17 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:17.964549887Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=616b35d7cfd8399befecc2e9313100de26bd13682553039ca5105b429fe9405f
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.021035647Z" level=info msg="ignoring event" container=616b35d7cfd8399befecc2e9313100de26bd13682553039ca5105b429fe9405f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.121322161Z" level=info msg="ignoring event" container=a666382066585752b99d3ea2b0612aa09dbaf132d6fe010fc8e99f758971dc2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.223216098Z" level=info msg="ignoring event" container=5e1e864e6525f7d441376e80d2cd57ae3566a0a7f2e919b2ae7d23492c82ff40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.322728007Z" level=info msg="ignoring event" container=dcedf6ae48a5f2ca7b69578459b074e060aaf794d440fb1e70c03d54fb9654a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:18 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:18.449998975Z" level=info msg="ignoring event" container=92476de0b2fa718d9ae037567aff75a8f6923252987b4a69a1e54b3a60d329c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:44 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:44.152194441Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:44 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:44.152684609Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:44 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:44.154000434Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:45.518368421Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 18:57:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:45.733924115Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 18:57:47 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:47.676468287Z" level=info msg="ignoring event" container=5576ebcd6b3efa134b9c442b0647348cef29804d2b542784afe1b97b7a2dc22e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:47 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:47.848223410Z" level=info msg="ignoring event" container=f869517dd5468d6a46bb30af6635548849d08d4fcd3bbde5c425eaf3d60cfbfc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:49 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:49.354790084Z" level=info msg="ignoring event" container=c16ac3e56aaa7f2f29a5982339afdeb3686f792e5c1b87df15682be069de7dd7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:49 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:49.456786438Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 18:57:50 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:50.321352553Z" level=info msg="ignoring event" container=01c76d66326a0454297a81f8f616dce83a05cd37a6537e261b874840deee1f08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:57:59 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:59.374088220Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:59 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:59.374132104Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:57:59 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:57:59.375540565Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:58:07 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:58:07.452185549Z" level=info msg="ignoring event" container=13dbafc5af21fed703e0494d3cb234721f4e631fb759cecabf5c8020b24484f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 18:58:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:58:45.036289948Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:58:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:58:45.036563207Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:58:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:58:45.037896959Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 18:58:45 no-preload-20220601115057-16804 dockerd[130]: time="2022-06-01T18:58:45.798686167Z" level=info msg="ignoring event" container=8fec8904d9a31c6ededca48fbf13a0f3cf876c47f858bc7ebc73b82508490626 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
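Note: the repeated fake.domain pull failures above are DNS misses, not registry errors: the hostname suggests this suite deliberately points the metrics-server image at an unresolvable registry (an assumption; the start flags are not shown in this excerpt), so every pull attempt dies at the lookup against the 192.168.65.2 resolver. Reproducible from any shell on the node:

    # Sketch: the same failure mode, independent of kubelet.
    # fake.domain/echoserver:1.4 is illustrative; any image name under
    # fake.domain fails the same way.
    docker pull fake.domain/echoserver:1.4 \
      || echo "pull failed at DNS: fake.domain does not resolve"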
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	8fec8904d9a31       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   cd910f8cc8526
	7d328481c5cd4       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   54 seconds ago       Running             kubernetes-dashboard        0                   f4d13ae8da737
	e2cc248695c73       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   2b1f146b179ae
	ba2ed58902e20       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   71f7928ea4b33
	c7958e3694523       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   f2eac62c164d2
	b0b923861477b       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   4772e1d81d09c
	8b39e72f9af37       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   4ac7db1243b2f
	14b019ef5d0f3       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   f7df7ad08e5ec
	3b5d6839a5a99       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   46160156c799a
	
	* 
	* ==> coredns [ba2ed58902e2] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
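Note: the Reloading pair here is CoreDNS picking up the ConfigMap rewrite performed earlier in this run; the configuration MD5 changes from db32ca36... to c23ed519... once the injected hosts block lands. A quick verification sketch:

    # Sketch: confirm the hosts block is in the live Corefile.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'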
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220601115057-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220601115057-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=no-preload-20220601115057-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_57_28_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 18:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220601115057-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 18:58:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 18:58:41 +0000   Wed, 01 Jun 2022 18:58:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    no-preload-20220601115057-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                3a379177-f2a3-4802-80f1-2537a7a88138
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-g4gfh                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     68s
	  kube-system                 etcd-no-preload-20220601115057-16804                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-no-preload-20220601115057-16804              250m (4%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-no-preload-20220601115057-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-77tsv                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-no-preload-20220601115057-16804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 metrics-server-b955d9d8-kz2wj                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         66s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-59knw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-rnkpm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 66s                kube-proxy  
	  Normal  NodeHasSufficientMemory  87s (x4 over 87s)  kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x4 over 87s)  kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x3 over 87s)  kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 87s                kubelet     Starting kubelet.
	  Normal  Starting                 80s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             80s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  80s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                70s                kubelet     Node no-preload-20220601115057-16804 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x2 over 7s)    kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x2 over 7s)    kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x2 over 7s)    kubelet     Node no-preload-20220601115057-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node no-preload-20220601115057-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node no-preload-20220601115057-16804 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [8b39e72f9af3] <==
	* {"level":"info","ts":"2022-06-01T18:57:22.277Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T18:57:22.277Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T18:57:22.278Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T18:57:22.862Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:no-preload-20220601115057-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T18:57:22.863Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T18:57:22.864Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T18:57:22.865Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"warn","ts":"2022-06-01T18:58:45.781Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"109.451323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-64897985d-g4gfh.16f49436aae4d69f\" ","response":"range_response_count:1 size:784"}
	{"level":"info","ts":"2022-06-01T18:58:45.781Z","caller":"traceutil/trace.go:171","msg":"trace[1209650079] range","detail":"{range_begin:/registry/events/kube-system/coredns-64897985d-g4gfh.16f49436aae4d69f; range_end:; response_count:1; response_revision:711; }","duration":"109.575133ms","start":"2022-06-01T18:58:45.671Z","end":"2022-06-01T18:58:45.781Z","steps":["trace[1209650079] 'agreement among raft nodes before linearized reading'  (duration: 29.102632ms)","trace[1209650079] 'range keys from in-memory index tree'  (duration: 80.31705ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T18:58:45.781Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.439331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:226"}
	{"level":"info","ts":"2022-06-01T18:58:45.781Z","caller":"traceutil/trace.go:171","msg":"trace[1193670213] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:711; }","duration":"103.505065ms","start":"2022-06-01T18:58:45.678Z","end":"2022-06-01T18:58:45.781Z","steps":["trace[1193670213] 'agreement among raft nodes before linearized reading'  (duration: 22.832785ms)","trace[1193670213] 'range keys from in-memory index tree'  (duration: 80.480804ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  18:58:49 up  1:01,  0 users,  load average: 0.55, 0.89, 1.06
	Linux no-preload-20220601115057-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b0b923861477] <==
	* I0601 18:57:26.235766       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 18:57:26.259882       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 18:57:26.304430       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 18:57:26.308157       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 18:57:26.309310       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 18:57:26.312967       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 18:57:27.096999       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 18:57:27.940164       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 18:57:27.947591       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 18:57:27.955362       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 18:57:28.145956       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 18:57:40.531931       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 18:57:40.880219       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 18:57:42.639364       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 18:57:42.964846       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.107.23.182]
	W0601 18:57:43.752250       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 18:57:43.752508       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 18:57:43.752605       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 18:57:43.943686       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.99.101.199]
	I0601 18:57:43.957667       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.135.92]
	W0601 18:58:43.709304       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 18:58:43.709403       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 18:58:43.709410       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3b5d6839a5a9] <==
	* I0601 18:57:43.765263       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.770141       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 18:57:43.772428       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.772502       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.776849       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.777210       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.784001       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.784031       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 18:57:43.790178       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 18:57:43.790476       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 18:57:43.836853       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-59knw"
	I0601 18:57:43.854862       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-rnkpm"
	E0601 18:58:41.292227       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0601 18:58:41.293398       1 event.go:294] "Event occurred" object="no-preload-20220601115057-16804" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node no-preload-20220601115057-16804 status is now: NodeNotReady"
	W0601 18:58:41.297555       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0601 18:58:41.301429       1 event.go:294] "Event occurred" object="kube-system/etcd-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.305565       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rnkpm" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.395144       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-77tsv" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.400980       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d-g4gfh" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.407465       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.494747       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.501177       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:41.507083       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0601 18:58:41.507361       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-no-preload-20220601115057-16804" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0601 18:58:46.507985       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
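	The FailedCreate/sync churn above is a startup race: the ReplicaSet controller tries to create the dashboard pods before the kubernetes-dashboard ServiceAccount exists, fails every sync with serviceaccount "kubernetes-dashboard" not found, and succeeds once the account appears (the SuccessfulCreate events at 18:57:43.8). Below is a minimal client-go sketch of that same existence check, polled until it passes; the file name, helper name, and use of the default kubeconfig are illustrative assumptions, not code from this test suite.

	// sa_wait.go: poll until a ServiceAccount exists before creating pods that
	// reference it -- the condition the ReplicaSet controller kept failing above.
	// Hypothetical helper; not part of minikube's integration tests.
	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForServiceAccount retries every second for up to 30s, treating
	// NotFound as "keep polling" and any other error as fatal.
	func waitForServiceAccount(cs kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if errors.IsNotFound(err) {
				return false, nil
			}
			return err == nil, err
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForServiceAccount(cs, "kubernetes-dashboard", "kubernetes-dashboard"); err != nil {
			panic(err)
		}
		fmt.Println("serviceaccount ready")
	}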
	* 
	* ==> kube-proxy [c7958e369452] <==
	* I0601 18:57:42.453004       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 18:57:42.453640       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 18:57:42.453728       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 18:57:42.636547       1 server_others.go:206] "Using iptables Proxier"
	I0601 18:57:42.636590       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 18:57:42.636598       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 18:57:42.636614       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 18:57:42.636881       1 server.go:656] "Version info" version="v1.23.6"
	I0601 18:57:42.637580       1 config.go:226] "Starting endpoint slice config controller"
	I0601 18:57:42.637604       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 18:57:42.637645       1 config.go:317] "Starting service config controller"
	I0601 18:57:42.637648       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 18:57:42.737986       1 shared_informer.go:247] Caches are synced for service config 
	I0601 18:57:42.738191       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
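	kube-proxy comes up cleanly here: with no proxy mode configured it assumes iptables, builds a dual-stack proxier, and its service/endpoint-slice informers report "Caches are synced" about 100ms after starting. The shared_informer.go lines follow client-go's standard start-then-wait pattern; a compact sketch of it (a hypothetical standalone program, not kube-proxy's actual wiring) is:

	// cache_sync.go: start an informer factory, then block until the service
	// informer's cache is synced -- the "Waiting for caches to sync" /
	// "Caches are synced" pair logged above. Sketch only.
	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())

		log.Println("Waiting for caches to sync for service config")
		if !cache.WaitForCacheSync(ctx.Done(), svcInformer.HasSynced) {
			log.Fatal("failed to sync service cache")
		}
		log.Println("Caches are synced for service config")
	}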
	* 
	* ==> kube-scheduler [14b019ef5d0f] <==
	* E0601 18:57:25.066282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 18:57:25.066645       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.066675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.066843       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 18:57:25.066856       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 18:57:25.066936       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 18:57:25.067019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 18:57:25.067202       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 18:57:25.067237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 18:57:25.904963       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 18:57:25.905002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 18:57:25.905763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.905795       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.953099       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.953150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.977555       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 18:57:25.977592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 18:57:25.991608       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 18:57:25.991644       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 18:57:25.996441       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 18:57:25.996477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 18:57:26.083117       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 18:57:26.083171       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 18:57:26.427030       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 18:57:28.759621       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
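	The scheduler's list/watch failures above are all RBAC denials hit while its informers start before the apiserver has finished serving system:kube-scheduler's role bindings; they stop once the extension-apiserver-authentication cache syncs at 18:57:28. The same permission can be probed explicitly with a SelfSubjectAccessReview; the sketch below is an illustrative standalone check (file name and kubeconfig path assumed), not something the scheduler itself runs.

	// rbac_check.go: ask the apiserver whether the current identity may list
	// pods at cluster scope -- the verb/resource denied in the lines above.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods", // no Namespace set, i.e. cluster scope
				},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}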
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:52:14 UTC, end at Wed 2022-06-01 18:58:49 UTC. --
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909136    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb29050-1356-4744-bd4d-456dbacdf15c-lib-modules\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909155    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/50c87a07-aa54-4366-9ac1-a31efd11fa2e-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-59knw\" (UID: \"50c87a07-aa54-4366-9ac1-a31efd11fa2e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909210    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnk96\" (UniqueName: \"kubernetes.io/projected/50c87a07-aa54-4366-9ac1-a31efd11fa2e-kube-api-access-wnk96\") pod \"dashboard-metrics-scraper-56974995fc-59knw\" (UID: \"50c87a07-aa54-4366-9ac1-a31efd11fa2e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909344    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9-config-volume\") pod \"coredns-64897985d-g4gfh\" (UID: \"5a668ae2-1ba8-4ae9-9c6a-ac07279e31f9\") " pod="kube-system/coredns-64897985d-g4gfh"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909486    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4h4\" (UniqueName: \"kubernetes.io/projected/85328a99-1f1c-4ee1-b140-b8b04cc702da-kube-api-access-bw4h4\") pod \"metrics-server-b955d9d8-kz2wj\" (UID: \"85328a99-1f1c-4ee1-b140-b8b04cc702da\") " pod="kube-system/metrics-server-b955d9d8-kz2wj"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909557    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb29050-1356-4744-bd4d-456dbacdf15c-xtables-lock\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909608    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2z4\" (UniqueName: \"kubernetes.io/projected/9fb29050-1356-4744-bd4d-456dbacdf15c-kube-api-access-bz2z4\") pod \"kube-proxy-77tsv\" (UID: \"9fb29050-1356-4744-bd4d-456dbacdf15c\") " pod="kube-system/kube-proxy-77tsv"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909690    7281 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmx7v\" (UniqueName: \"kubernetes.io/projected/9a79845e-efd3-46b7-80bb-0c7309ca22ca-kube-api-access-hmx7v\") pod \"kubernetes-dashboard-8469778f77-rnkpm\" (UID: \"9a79845e-efd3-46b7-80bb-0c7309ca22ca\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rnkpm"
	Jun 01 18:58:42 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:42.909725    7281 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:44.077052    7281 request.go:665] Waited for 1.15567193s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.086155    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220601115057-16804\" already exists" pod="kube-system/etcd-no-preload-20220601115057-16804"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.302409    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220601115057-16804\" already exists" pod="kube-system/kube-scheduler-no-preload-20220601115057-16804"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.480924    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220601115057-16804\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220601115057-16804"
	Jun 01 18:58:44 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:44.681458    7281 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220601115057-16804\" already exists" pod="kube-system/kube-apiserver-no-preload-20220601115057-16804"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038355    7281 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038439    7281 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038529    7281 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bw4h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-kz2wj_kube-system(85328a99-1f1c-4ee1-b140-b8b04cc702da): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.038577    7281 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-kz2wj" podUID=85328a99-1f1c-4ee1-b140-b8b04cc702da
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:45.582016    7281 scope.go:110] "RemoveContainer" containerID="13dbafc5af21fed703e0494d3cb234721f4e631fb759cecabf5c8020b24484f9"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: W0601 18:58:45.825072    7281 container.go:489] Failed to get RecentStats("/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c87a07_aa54_4366_9ac1_a31efd11fa2e.slice/docker-8fec8904d9a31c6ededca48fbf13a0f3cf876c47f858bc7ebc73b82508490626.scope") while determining the next housekeeping: unable to find data in memory cache
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:45.937578    7281 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw through plugin: invalid network status for"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:45.943289    7281 scope.go:110] "RemoveContainer" containerID="13dbafc5af21fed703e0494d3cb234721f4e631fb759cecabf5c8020b24484f9"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:45.944478    7281 scope.go:110] "RemoveContainer" containerID="8fec8904d9a31c6ededca48fbf13a0f3cf876c47f858bc7ebc73b82508490626"
	Jun 01 18:58:45 no-preload-20220601115057-16804 kubelet[7281]: E0601 18:58:45.944751    7281 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-59knw_kubernetes-dashboard(50c87a07-aa54-4366-9ac1-a31efd11fa2e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw" podUID=50c87a07-aa54-4366-9ac1-a31efd11fa2e
	Jun 01 18:58:46 no-preload-20220601115057-16804 kubelet[7281]: I0601 18:58:46.950518    7281 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-59knw through plugin: invalid network status for"
	
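	Two distinct failure loops show up in the kubelet log: metrics-server is stuck in ErrImagePull because its image points at the unresolvable fake.domain registry, and dashboard-metrics-scraper is in a 10s CrashLoopBackOff. Both surface as Waiting states on the pods' container statuses, which makes them easy to enumerate; the sketch below is a hypothetical triage helper (not from helpers_test.go), assuming a reachable cluster via the default kubeconfig.

	// waiting_reasons.go: print the Waiting reason/message for every container
	// in a namespace, surfacing states like ErrImagePull and CrashLoopBackOff.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if w := st.State.Waiting; w != nil {
					// e.g. metrics-server-...  metrics-server  ErrImagePull  ...no such host
					fmt.Printf("%s\t%s\t%s\t%s\n", p.Name, st.Name, w.Reason, w.Message)
				}
			}
		}
	}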
	* 
	* ==> kubernetes-dashboard [7d328481c5cd] <==
	* 2022/06/01 18:57:54 Using namespace: kubernetes-dashboard
	2022/06/01 18:57:54 Using in-cluster config to connect to apiserver
	2022/06/01 18:57:54 Using secret token for csrf signing
	2022/06/01 18:57:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 18:57:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 18:57:54 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 18:57:54 Generating JWE encryption key
	2022/06/01 18:57:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 18:57:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 18:57:54 Initializing JWE encryption key from synchronized object
	2022/06/01 18:57:54 Creating in-cluster Sidecar client
	2022/06/01 18:57:54 Serving insecurely on HTTP port: 9090
	2022/06/01 18:57:54 Starting overwatch
	2022/06/01 18:57:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 18:58:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [e2cc248695c7] <==
	* I0601 18:57:43.658769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 18:57:43.666905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 18:57:43.666983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 18:57:43.673667       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 18:57:43.673850       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220601115057-16804_0a75bbf8-8297-4952-afb0-d5692f3b65b7!
	I0601 18:57:43.673970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d8917ce-acb3-4188-9f13-0eda07809269", APIVersion:"v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220601115057-16804_0a75bbf8-8297-4952-afb0-d5692f3b65b7 became leader
	I0601 18:57:43.776125       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220601115057-16804_0a75bbf8-8297-4952-afb0-d5692f3b65b7!
	
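	The storage provisioner gates its controller on client-go leader election: the leaderelection.go:243/253 lines show it acquiring the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-backed lock here) before "Starting provisioner controller". A compressed sketch of that pattern follows; it uses the newer Lease lock rather than Endpoints, and the identity and durations are illustrative, not the provisioner's actual configuration.

	// leader.go: acquire a leader lock, then start work only once elected.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname() // unique identity per candidate
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			cs.CoreV1(), cs.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; shutting down")
				},
			},
		})
	}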

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-kz2wj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 describe pod metrics-server-b955d9d8-kz2wj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220601115057-16804 describe pod metrics-server-b955d9d8-kz2wj: exit status 1 (284.814736ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-kz2wj" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220601115057-16804 describe pod metrics-server-b955d9d8-kz2wj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.99s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0601 12:02:10.132838   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:02:30.612827   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:02:46.747954   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 12:02:54.127157   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:03:03.732288   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:03:11.572460   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:03:14.580616   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:04:07.869867   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:04:26.844886   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 12:04:32.631688   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 12:04:33.493800   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:04:38.183483   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:05:20.693445   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:06:13.993582   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 12:06:22.087893   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:06:23.695249   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:06:49.636804   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:07:17.332068   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:07:41.241574   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 12:07:45.140860   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:07:54.122292   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:08:03.728562   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:08:14.576882   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:09:07.865300   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:09:17.177681   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:09:32.639231   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 12:09:38.190119   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:10:20.703627   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:10:30.939759   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:10:44.422890   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:11:14.004874   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:11:22.101228   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 12:11:23.707287   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:276: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
start_stop_delete_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (451.258286ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:276: status error: exit status 2 (may be ok)
start_stop_delete_test.go:276: "old-k8s-version-20220601114806-16804" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
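The nine minutes of WARNING/EOF lines above are a single label-selector poll hitting an apiserver that is no longer answering on 127.0.0.1:59946 (its status reads "Stopped" below), while the interleaved cert_rotation errors come from client-go's certificate reload watching client.crt files of profiles that have since been deleted. The poll shape matches client-go's wait helpers, whose timeout error is exactly the "timed out waiting for the condition" reported above; a minimal sketch (hypothetical file and helper names; the real logic lives in helpers_test.go) is:

	// pod_wait.go: poll for a Running pod matching a label selector until a
	// timeout, tolerating transient list errors like the EOFs above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Transient errors are warned about and retried, not treated as
				// fatal -- so an unresponsive apiserver simply burns the timeout.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitForRunningPod(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
		fmt.Println(err) // on failure: "timed out waiting for the condition"
	}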
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

-- stdout --
	[
	    {
	        "Id": "ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273",
	        "Created": "2022-06-01T18:48:12.461821519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:53:51.165763227Z",
	            "FinishedAt": "2022-06-01T18:53:48.32715559Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hosts",
	        "LogPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273-json.log",
	        "Name": "/old-k8s-version-20220601114806-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601114806-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601114806-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601114806-16804",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601114806-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601114806-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df15676c71a0eb8c1755841478abd978fa8d8f53d24ceed344774583d711d893",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59947"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59948"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59946"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df15676c71a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601114806-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff69f8f777d8",
	                        "old-k8s-version-20220601114806-16804"
	                    ],
	                    "NetworkID": "246cf6a028e4e11a14e92d87f31441d673c4de3a42936ed926f0c32bee110562",
	                    "EndpointID": "248cec2b4960c9be6d236f5305db55c60b48dd57301f892e0015a2ab70c18ccf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
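The inspect dump above reduces to two facts for this post-mortem: the container itself is running (State.Status), and the apiserver port 8443/tcp is published on 127.0.0.1:59946. Both can be queried directly rather than scanning the full JSON; a small sketch using the same Go-template mechanism the helpers use:

    docker inspect old-k8s-version-20220601114806-16804 \
      --format 'status={{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}'
    docker port old-k8s-version-20220601114806-16804 8443/tcp

Note that HostConfig.PortBindings requests HostPort "0", so Docker assigns ephemeral host ports on each start; the effective mappings are the ones under NetworkSettings.Ports.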
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (558.290125ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220601114806-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220601114806-16804 logs -n 25: (3.506811497s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |                |                     |                     |
	|         | --driver=docker                                   |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:01 PDT | 01 Jun 22 12:02 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |                |                     |                     |
	|         | --driver=docker                                   |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601120640-16804      | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | disable-driver-mounts-20220601120640-16804        |                                                 |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 12:07:46
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 12:07:46.714016   29448 out.go:296] Setting OutFile to fd 1 ...
	I0601 12:07:46.714188   29448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:07:46.714193   29448 out.go:309] Setting ErrFile to fd 2...
	I0601 12:07:46.714197   29448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:07:46.714298   29448 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 12:07:46.714566   29448 out.go:303] Setting JSON to false
	I0601 12:07:46.729641   29448 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9436,"bootTime":1654101030,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 12:07:46.729740   29448 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 12:07:46.752140   29448 out.go:177] * [default-k8s-different-port-20220601120641-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 12:07:46.795651   29448 notify.go:193] Checking for updates...
	I0601 12:07:46.817606   29448 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 12:07:46.839623   29448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:07:46.860438   29448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 12:07:46.881832   29448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 12:07:46.903780   29448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 12:07:46.926112   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:07:46.926797   29448 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 12:07:47.000143   29448 docker.go:137] docker version: linux-20.10.14
	I0601 12:07:47.000297   29448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:07:47.131409   29448 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:07:47.070748627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
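	The two docker info dumps in this start sequence are single-line Go-struct dumps. When only a few fields matter, they can be pulled with a format template instead of parsing the whole blob; the field names below are taken from the dump above, and this is only an illustrative sketch:

    docker system info --format '{{.ServerVersion}} driver={{.Driver}} cpus={{.NCPU}} mem={{.MemTotal}}'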
	I0601 12:07:47.153289   29448 out.go:177] * Using the docker driver based on existing profile
	I0601 12:07:47.173957   29448 start.go:284] selected driver: docker
	I0601 12:07:47.173973   29448 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:47.174080   29448 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:07:47.176304   29448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:07:47.306114   29448 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:07:47.248401536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:07:47.306271   29448 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 12:07:47.306288   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:07:47.306295   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:07:47.306302   29448 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:47.350023   29448 out.go:177] * Starting control plane node default-k8s-different-port-20220601120641-16804 in cluster default-k8s-different-port-20220601120641-16804
	I0601 12:07:47.372290   29448 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:07:47.393752   29448 out.go:177] * Pulling base image ...
	I0601 12:07:47.437193   29448 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:07:47.437222   29448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:07:47.437287   29448 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:07:47.437322   29448 cache.go:57] Caching tarball of preloaded images
	I0601 12:07:47.437521   29448 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:07:47.437544   29448 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
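	The preload check above only verifies that the cached tarball exists. A quick manual sanity check of the same artifact, using $MINIKUBE_HOME as illustrative shorthand for the fully expanded path in the log line (lz4 -t decodes the frame without writing any output):

    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4"
    lz4 -t "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4"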
	I0601 12:07:47.438529   29448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/config.json ...
	I0601 12:07:47.502152   29448 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:07:47.502172   29448 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:07:47.502180   29448 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:07:47.502221   29448 start.go:352] acquiring machines lock for default-k8s-different-port-20220601120641-16804: {Name:mk5000a48e15938a8ff193f7b1e0ef0205ca69c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:07:47.502307   29448 start.go:356] acquired machines lock for "default-k8s-different-port-20220601120641-16804" in 54.718µs
	I0601 12:07:47.502327   29448 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:07:47.502337   29448 fix.go:55] fixHost starting: 
	I0601 12:07:47.502581   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:07:47.570243   29448 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601120641-16804: state=Stopped err=<nil>
	W0601 12:07:47.570270   29448 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:07:47.592778   29448 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601120641-16804" ...
	I0601 12:07:47.614873   29448 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601120641-16804
	I0601 12:07:47.973167   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:07:48.048677   29448 kic.go:416] container "default-k8s-different-port-20220601120641-16804" state is running.
	I0601 12:07:48.049618   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.132914   29448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/config.json ...
	I0601 12:07:48.133339   29448 machine.go:88] provisioning docker machine ...
	I0601 12:07:48.133364   29448 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601120641-16804"
	I0601 12:07:48.133419   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.212122   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:48.212345   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:48.212357   29448 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601120641-16804 && echo "default-k8s-different-port-20220601120641-16804" | sudo tee /etc/hostname
	I0601 12:07:48.344170   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601120641-16804
	
	I0601 12:07:48.344259   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.422969   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:48.423135   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:48.423162   29448 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601120641-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601120641-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601120641-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:07:48.544579   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:07:48.544600   29448 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:07:48.544627   29448 ubuntu.go:177] setting up certificates
	I0601 12:07:48.544647   29448 provision.go:83] configureAuth start
	I0601 12:07:48.544718   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.622717   29448 provision.go:138] copyHostCerts
	I0601 12:07:48.622832   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:07:48.622842   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:07:48.622937   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:07:48.623147   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:07:48.623156   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:07:48.623223   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:07:48.623375   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:07:48.623383   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:07:48.623455   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:07:48.623608   29448 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601120641-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601120641-16804]
	I0601 12:07:48.807397   29448 provision.go:172] copyRemoteCerts
	I0601 12:07:48.807465   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:07:48.807513   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.880166   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:48.968528   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:07:48.985675   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0601 12:07:49.003100   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
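	The server.pem copied above was generated with the SAN list from the provision step (192.168.58.2, 127.0.0.1, localhost, minikube, and the profile name). To confirm what actually landed on the node, one can reuse the SSH endpoint from the sshutil line above; this is a sketch and assumes openssl is present in the kicbase image:

    ssh -p 61977 \
      -i /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa \
      docker@127.0.0.1 \
      'sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'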
	I0601 12:07:49.020622   29448 provision.go:86] duration metric: configureAuth took 475.965498ms
	I0601 12:07:49.020634   29448 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:07:49.020838   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:07:49.020914   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.093673   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.093829   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.093841   29448 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:07:49.210457   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:07:49.210469   29448 ubuntu.go:71] root file system type: overlay
	I0601 12:07:49.210594   29448 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:07:49.210662   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.283147   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.283317   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.283384   29448 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:07:49.409302   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:07:49.409387   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.484156   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.484332   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.484346   29448 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:07:49.604444   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:07:49.604461   29448 machine.go:91] provisioned docker machine in 1.471129128s
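The update-if-changed one-liner above (diff || { mv; daemon-reload; enable; restart }) only restarts Docker when the rendered unit actually differs from the installed one. Expanded for readability (a sketch, not minikube's exact code):

    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi

Here the SSH output above is empty, meaning diff printed nothing and exited 0: the files were identical and no restart was triggered.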
	I0601 12:07:49.604471   29448 start.go:306] post-start starting for "default-k8s-different-port-20220601120641-16804" (driver="docker")
	I0601 12:07:49.604477   29448 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:07:49.604532   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:07:49.604575   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.678684   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:49.764315   29448 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:07:49.767903   29448 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:07:49.767938   29448 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:07:49.767950   29448 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:07:49.767956   29448 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:07:49.767967   29448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:07:49.768069   29448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:07:49.768203   29448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:07:49.768341   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:07:49.775308   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:07:49.792544   29448 start.go:309] post-start completed in 188.064447ms
	I0601 12:07:49.792632   29448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:07:49.792692   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.865476   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:49.948688   29448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:07:49.953166   29448 fix.go:57] fixHost completed within 2.450856104s
	I0601 12:07:49.953184   29448 start.go:81] releasing machines lock for "default-k8s-different-port-20220601120641-16804", held for 2.450894668s
	I0601 12:07:49.953267   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.025599   29448 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:07:50.025607   29448 ssh_runner.go:195] Run: systemctl --version
	I0601 12:07:50.025662   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.025679   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.104052   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:50.107382   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:50.327885   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:07:50.339355   29448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:07:50.349315   29448 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:07:50.349405   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:07:50.358883   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:07:50.372373   29448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:07:50.437808   29448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:07:50.507439   29448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:07:50.517896   29448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:07:50.595253   29448 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:07:50.605452   29448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:07:50.643710   29448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:07:50.725451   29448 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:07:50.725679   29448 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220601120641-16804 dig +short host.docker.internal
	I0601 12:07:50.872144   29448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:07:50.872230   29448 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:07:50.877033   29448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
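The one-liner above is minikube's usual dedupe-then-append /etc/hosts edit: strip any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back with sudo. Spelled out (sketch):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.65.2\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts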
	I0601 12:07:50.888570   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.963632   29448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:07:50.963715   29448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:07:50.995814   29448 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:07:50.995829   29448 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:07:50.995925   29448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:07:51.027279   29448 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:07:51.027298   29448 cache_images.go:84] Images are preloaded, skipping loading
	I0601 12:07:51.027382   29448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
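The probe above must agree with the kubelet configuration: the kubeadm config rendered below sets cgroupDriver: systemd, so this check is expected to print systemd (sketch):

    docker info --format '{{.CgroupDriver}}'
    # systemd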
	I0601 12:07:51.102384   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:07:51.102395   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:07:51.102413   29448 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 12:07:51.102444   29448 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601120641-16804 NodeName:default-k8s-different-port-20220601120641-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:07:51.102543   29448 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220601120641-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 12:07:51.102663   29448 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220601120641-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 12:07:51.102752   29448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:07:51.110604   29448 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:07:51.110649   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:07:51.117796   29448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0601 12:07:51.130564   29448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:07:51.142900   29448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0601 12:07:51.156672   29448 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:07:51.160712   29448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:07:51.170404   29448 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804 for IP: 192.168.58.2
	I0601 12:07:51.170523   29448 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:07:51.170574   29448 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:07:51.170655   29448 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.key
	I0601 12:07:51.170735   29448 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.key.cee25041
	I0601 12:07:51.170798   29448 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.key
	I0601 12:07:51.170999   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:07:51.171039   29448 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:07:51.171051   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:07:51.171085   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:07:51.171121   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:07:51.171151   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:07:51.171217   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:07:51.171773   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:07:51.189027   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:07:51.206439   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:07:51.223833   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 12:07:51.241255   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:07:51.258545   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:07:51.275931   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:07:51.293213   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:07:51.310440   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:07:51.327345   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:07:51.344962   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:07:51.362940   29448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:07:51.376186   29448 ssh_runner.go:195] Run: openssl version
	I0601 12:07:51.381980   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:07:51.389866   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.393905   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.393948   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.400411   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:07:51.408002   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:07:51.415937   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.420272   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.420316   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.426141   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 12:07:51.433640   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:07:51.442012   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.446045   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.446083   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.451363   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
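The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is how the system trust directory indexes CA certificates; for any CA the link can be derived the same way (sketch):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # here: b5213941.0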
	I0601 12:07:51.459039   29448 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:51.459130   29448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:07:51.489623   29448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:07:51.497725   29448 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:07:51.497738   29448 kubeadm.go:626] restartCluster start
	I0601 12:07:51.497782   29448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:07:51.504873   29448 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:51.504936   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:51.581506   29448 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601120641-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:07:51.581714   29448 kubeconfig.go:127] "default-k8s-different-port-20220601120641-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:07:51.582051   29448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:07:51.583188   29448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:07:51.591318   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:51.591366   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:51.600659   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:51.802801   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:51.803000   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:51.813919   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.002131   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.002333   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.013050   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.202762   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.203003   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.214293   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.400742   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.401039   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.413371   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.602798   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.603014   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.614030   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.802753   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.802902   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.813453   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.002110   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.002210   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.012890   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.200734   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.200808   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.209534   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.402797   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.402935   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.413942   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.602772   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.602954   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.614236   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.802625   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.802807   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.813315   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.000805   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.000963   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.011753   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.201071   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.201206   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.210732   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.401125   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.401238   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.411188   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.601290   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.601393   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.611951   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.611961   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.612012   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.620879   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.620892   29448 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 12:07:54.620904   29448 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:07:54.620958   29448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:07:54.654891   29448 docker.go:442] Stopping containers: [7328817f3bb4 d3f44f8f8e39 134c635592c8 46d8169c54fd a771108a72ba a3f49451d3a0 607c9ad659d0 8a911f22f085 e379e0b74a15 d25d7a042066 b1e1d206888c 93f762382a29 715955d40c64 a75eb9d31e2c b1116ac2ed18 30914a4918f1]
	I0601 12:07:54.654963   29448 ssh_runner.go:195] Run: docker stop 7328817f3bb4 d3f44f8f8e39 134c635592c8 46d8169c54fd a771108a72ba a3f49451d3a0 607c9ad659d0 8a911f22f085 e379e0b74a15 d25d7a042066 b1e1d206888c 93f762382a29 715955d40c64 a75eb9d31e2c b1116ac2ed18 30914a4918f1
	I0601 12:07:54.686689   29448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:07:54.699901   29448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:07:54.707795   29448 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 19:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  1 19:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun  1 19:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 19:06 /etc/kubernetes/scheduler.conf
	
	I0601 12:07:54.707845   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 12:07:54.716136   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 12:07:54.724080   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 12:07:54.731538   29448 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.731581   29448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:07:54.738546   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 12:07:54.745577   29448 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.745680   29448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 12:07:54.752549   29448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:07:54.759719   29448 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:07:54.759733   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:54.804484   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:55.824635   29448 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020145427s)
	I0601 12:07:55.824694   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:55.951077   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:56.004952   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
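Taken together, the restart path above re-runs only the kubeadm init phases it needs rather than a full kubeadm init; the sequence driven by the harness is equivalent to (sketch):

    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml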
	I0601 12:07:56.056152   29448 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:07:56.056230   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:56.577348   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.077226   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.577374   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.591888   29448 api_server.go:71] duration metric: took 1.535763125s to wait for apiserver process to appear ...
	I0601 12:07:57.591909   29448 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:07:57.591919   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:00.189999   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 12:08:00.190016   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 12:08:00.691046   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:00.696359   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:08:00.696371   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:08:01.190364   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:01.197216   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:08:01.197234   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:08:01.692213   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:01.699656   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 200:
	ok
	I0601 12:08:01.706073   29448 api_server.go:140] control plane version: v1.23.6
	I0601 12:08:01.706084   29448 api_server.go:130] duration metric: took 4.11422006s to wait for apiserver health ...
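The 403 → 500 → 200 progression above is the apiserver coming up in order: unauthenticated /healthz access is only granted once the rbac/bootstrap-roles post-start hook completes (hence the initial system:anonymous 403), and the 500 bodies list exactly which post-start hooks are still pending. The same probe can be reproduced by hand (sketch; 61981 is the host-mapped port from this run):

    curl -sk 'https://127.0.0.1:61981/healthz?verbose'
    # or, with credentials, from inside the node:
    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw='/healthz?verbose'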
	I0601 12:08:01.706092   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:08:01.706097   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:08:01.706108   29448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:08:01.712337   29448 system_pods.go:59] 8 kube-system pods found
	I0601 12:08:01.712354   29448 system_pods.go:61] "coredns-64897985d-v5l86" [cebeba0e-d16c-4439-973e-3ddc9003cc40] Running
	I0601 12:08:01.712358   29448 system_pods.go:61] "etcd-default-k8s-different-port-20220601120641-16804" [c387f857-e5ff-45bd-b88c-09e06c1626b3] Running
	I0601 12:08:01.712366   29448 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [b256af8c-900c-49b6-b749-7d33ef7179e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 12:08:01.712376   29448 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [4dbe125a-f3ba-4200-85cb-744388b849ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 12:08:01.712381   29448 system_pods.go:61] "kube-proxy-7kqlg" [c5fea19e-e60f-4b90-b2e0-76618c2b78cc] Running
	I0601 12:08:01.712387   29448 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [cde39bae-3f41-4858-a543-60f81bff3509] Running
	I0601 12:08:01.712391   29448 system_pods.go:61] "metrics-server-b955d9d8-48tdv" [0c245d32-4061-4d02-b798-d0766b893fc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:08:01.712395   29448 system_pods.go:61] "storage-provisioner" [e70fe26d-b8cb-4d3d-8e22-76d353fcb4c8] Running
	I0601 12:08:01.712399   29448 system_pods.go:74] duration metric: took 6.286581ms to wait for pod list to return data ...
	I0601 12:08:01.712405   29448 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:08:01.715083   29448 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:08:01.735735   29448 node_conditions.go:123] node cpu capacity is 6
	I0601 12:08:01.735751   29448 node_conditions.go:105] duration metric: took 23.342838ms to run NodePressure ...
	I0601 12:08:01.735781   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:08:01.859703   29448 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 12:08:01.863929   29448 kubeadm.go:777] kubelet initialised
	I0601 12:08:01.863940   29448 kubeadm.go:778] duration metric: took 4.22226ms waiting for restarted kubelet to initialise ...
	I0601 12:08:01.863948   29448 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:08:01.874140   29448 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-v5l86" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.878366   29448 pod_ready.go:92] pod "coredns-64897985d-v5l86" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:01.878375   29448 pod_ready.go:81] duration metric: took 4.22218ms waiting for pod "coredns-64897985d-v5l86" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.878381   29448 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.883193   29448 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:01.883207   29448 pod_ready.go:81] duration metric: took 4.820642ms waiting for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.883218   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:03.899247   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:06.396930   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:08.397693   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:10.899832   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:13.396683   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:15.397145   29448 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.397158   29448 pod_ready.go:81] duration metric: took 13.514096644s waiting for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.397165   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.401295   29448 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.401303   29448 pod_ready.go:81] duration metric: took 4.132737ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.401309   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7kqlg" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.405245   29448 pod_ready.go:92] pod "kube-proxy-7kqlg" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.405253   29448 pod_ready.go:81] duration metric: took 3.9394ms waiting for pod "kube-proxy-7kqlg" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.405259   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.409049   29448 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.409056   29448 pod_ready.go:81] duration metric: took 3.792078ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.409061   29448 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:17.421198   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:19.921779   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:21.921963   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:24.419625   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:26.918715   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:28.920464   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:31.417510   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:33.421585   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:35.919309   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:37.919425   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:39.921636   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:42.419249   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:44.421280   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:46.919320   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:48.919646   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:51.419182   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:53.919801   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:55.921377   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:58.419040   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:00.420223   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:02.919098   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:05.422270   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:07.920676   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:09.921475   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:12.421183   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:14.423686   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:16.925551   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:18.926973   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:21.427812   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:23.428681   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:25.929071   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:28.428550   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:30.931471   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:33.429632   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:35.430443   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:37.431177   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:39.933430   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:42.430572   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:44.430931   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:46.434046   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:48.933873   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:51.431937   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:53.933106   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:56.432902   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:58.934520   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:01.433804   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:03.933118   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:05.934862   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:08.433670   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:10.933334   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:12.934779   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:15.433922   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:17.932737   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:19.934008   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:22.433285   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:24.933318   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:26.933678   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:29.431649   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:31.933315   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:34.433040   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:36.934014   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:39.432853   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:41.934681   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:44.432653   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:46.432803   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:48.932523   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:50.933280   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:53.433228   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:55.933454   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:57.933536   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:59.933863   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:02.432410   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:04.435185   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:06.435392   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:08.934115   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:10.934791   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:13.434833   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:15.934016   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:18.431815   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:20.434177   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:22.932827   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:24.934570   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:27.432774   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:29.433296   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:53:51 UTC, end at Wed 2022-06-01 19:11:32 UTC. --
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.457800407Z" level=info msg="Starting up"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459880544Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459918540Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459935542Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459943396Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461558394Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461592263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461607683Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461615678Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.467062010Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.471139789Z" level=info msg="Loading containers: start."
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.555493702Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.587145357Z" level=info msg="Loading containers: done."
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.597281456Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.597355151Z" level=info msg="Daemon has completed initialization"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 systemd[1]: Started Docker Application Container Engine.
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.622139295Z" level=info msg="API listen on [::]:2376"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.626019498Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-01T19:11:34Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:11:35 up  1:14,  0 users,  load average: 0.56, 0.59, 0.73
	Linux old-k8s-version-20220601114806-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:53:51 UTC, end at Wed 2022-06-01 19:11:35 UTC. --
	Jun 01 19:11:33 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: I0601 19:11:34.281457   24413 server.go:410] Version: v1.16.0
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: I0601 19:11:34.281705   24413 plugins.go:100] No cloud provider specified.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: I0601 19:11:34.281740   24413 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: I0601 19:11:34.283444   24413 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: W0601 19:11:34.284118   24413 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: W0601 19:11:34.284179   24413 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 kubelet[24413]: F0601 19:11:34.284202   24413 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 19:11:34 old-k8s-version-20220601114806-16804 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: I0601 19:11:35.035593   24444 server.go:410] Version: v1.16.0
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: I0601 19:11:35.035839   24444 plugins.go:100] No cloud provider specified.
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: I0601 19:11:35.035851   24444 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: I0601 19:11:35.037548   24444 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: W0601 19:11:35.038186   24444 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: W0601 19:11:35.038247   24444 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 kubelet[24444]: F0601 19:11:35.038303   24444 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 19:11:35 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0601 12:11:34.911061   29589 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (468.962944ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220601114806-16804" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.17s)
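Note: the failure above is a kubelet crash loop, not a harness flake. systemd has restarted kubelet 929 times, and every attempt dies at startup with "failed to run Kubelet: mountpoint for cpu not found"; with no kubelet, the apiserver on localhost:8443 never comes back, which is why "describe nodes" and the status probes fail. One plausible (unconfirmed here) cause is the linuxkit host exposing only cgroup v2, which the v1.16 kubelet predates. As a rough illustration of the unmet precondition, here is a minimal standalone sketch -- not minikube or kubelet code -- that performs the equivalent cgroup-v1 check by scanning /proc/mounts for a cgroup mount carrying the "cpu" controller:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasCPUCgroupMount reports whether /proc/mounts lists a cgroup-v1
// filesystem whose mount options include the "cpu" controller -- roughly
// the precondition the kubelet log above reports as missing.
func hasCPUCgroupMount() (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line: device mountpoint fstype options dump pass.
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 || fields[2] != "cgroup" {
			continue // cgroup-v2 hosts expose fstype "cgroup2" instead, so this stays false
		}
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == "cpu" {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasCPUCgroupMount()
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("cpu cgroup mounted:", ok)
}

On a cgroup-v1 host this prints true; inside the old-k8s-version container above it would presumably print false, matching the kubelet error.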

TestStartStop/group/embed-certs/serial/Pause (44.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220601115855-16804 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
E0601 12:05:55.689372   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804: exit status 2 (16.116214149s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804: exit status 2 (16.114763987s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220601115855-16804 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-darwin-amd64 unpause -p embed-certs-20220601115855-16804 --alsologtostderr -v=1: (1.188318356s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
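Note: the assertion at start_stop_delete_test.go:313 is a plain string comparison: after "minikube pause", status --format={{.APIServer}} must print "Paused", but here it printed "Stopped" (and each status call itself took ~16s). A minimal sketch of that style of check, assuming the binary path and profile name copied from the log above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Binary and profile names are taken from the test log above.
	profile := "embed-certs-20220601115855-16804"
	out, err := exec.Command("out/minikube-darwin-amd64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
	// `minikube status` exits non-zero for non-Running components, so a
	// non-nil err is not itself fatal ("may be ok" per helpers_test.go).
	got := strings.TrimSpace(string(out))
	fmt.Printf("apiserver status=%q (err=%v)\n", got, err)
	if got != "Paused" {
		fmt.Println(`FAIL: want "Paused"`)
		os.Exit(1)
	}
}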
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601115855-16804
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601115855-16804:

-- stdout --
	[
	    {
	        "Id": "daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784",
	        "Created": "2022-06-01T18:59:02.28302225Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:00:00.971874149Z",
	            "FinishedAt": "2022-06-01T18:59:59.00759091Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/hostname",
	        "HostsPath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/hosts",
	        "LogPath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784-json.log",
	        "Name": "/embed-certs-20220601115855-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601115855-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601115855-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/docker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da065f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601115855-16804",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601115855-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601115855-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601115855-16804",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601115855-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5be6800116ac1a8a8437205abd9ac248a5c246bb27fddf3f127842a92f323157",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60747"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60748"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60749"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60746"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5be6800116ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601115855-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "daff3bf0eba4",
	                        "embed-certs-20220601115855-16804"
	                    ],
	                    "NetworkID": "14104cac19a3970344e7e464fdc2a9525956f5dfe25aebc2916d1b0f0bef30de",
	                    "EndpointID": "3b1416234cb89aaef25eda8d72cf7dbc0b022d15bfd5613484628b57f77b3ac3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
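Note: the inspect dump above is mostly static container configuration; the post-mortem-relevant part is "State", which still reports "Status": "running" and "Paused": false even though minikube says the apiserver is Stopped. When only a few fields matter, "docker inspect --format" with a Go template (a standard docker CLI flag) is a lighter probe than capturing the whole JSON document; a minimal sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container name taken from the inspect dump above.
	name := "embed-certs-20220601115855-16804"
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}} paused={{.State.Paused}}", name).Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "running paused=false"
}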
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220601115855-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220601115855-16804 logs -n 25: (3.003102579s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |                |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:57 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:01 PDT | 01 Jun 22 12:02 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:59:59
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:59:59.653204   28829 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:59:59.653367   28829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:59:59.653373   28829 out.go:309] Setting ErrFile to fd 2...
	I0601 11:59:59.653377   28829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:59:59.653471   28829 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:59:59.653745   28829 out.go:303] Setting JSON to false
	I0601 11:59:59.668907   28829 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8969,"bootTime":1654101030,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:59:59.669021   28829 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:59:59.692330   28829 out.go:177] * [embed-certs-20220601115855-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:59:59.734931   28829 notify.go:193] Checking for updates...
	I0601 11:59:59.755632   28829 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:59:59.776895   28829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:59:59.797902   28829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:59:59.818891   28829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:59:59.840237   28829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:59:58.294591   28319 out.go:204]   - Booting up control plane ...
	I0601 11:59:59.862690   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:59:59.863349   28829 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:59:59.936186   28829 docker.go:137] docker version: linux-20.10.14
	I0601 11:59:59.936326   28829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:00:00.071723   28829 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:00:00.020706131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:00:00.115439   28829 out.go:177] * Using the docker driver based on existing profile
	I0601 12:00:00.136972   28829 start.go:284] selected driver: docker
	I0601 12:00:00.137021   28829 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:00.137102   28829 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:00:00.139236   28829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:00:00.273893   28829 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:00:00.221448867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:00:00.274092   28829 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 12:00:00.274108   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:00.274119   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:00.274130   28829 start_flags.go:306] config:
	{Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:00.317950   28829 out.go:177] * Starting control plane node embed-certs-20220601115855-16804 in cluster embed-certs-20220601115855-16804
	I0601 12:00:00.339702   28829 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:00:00.361619   28829 out.go:177] * Pulling base image ...
	I0601 12:00:00.403754   28829 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:00:00.403769   28829 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:00:00.403845   28829 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:00:00.403870   28829 cache.go:57] Caching tarball of preloaded images
	I0601 12:00:00.404060   28829 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:00:00.404081   28829 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 12:00:00.405150   28829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/config.json ...
	I0601 12:00:00.473577   28829 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:00:00.473612   28829 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:00:00.473622   28829 cache.go:206] Successfully downloaded all kic artifacts
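	"Found ... in local docker daemon, skipping pull" means the kic base image resolved by digest against the local daemon, so no registry pull was needed. The same presence check can be reproduced by hand (a sketch; the repo@digest reference form is used since inspect accepts it):

	    docker image inspect gcr.io/k8s-minikube/kicbase-builds@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a >/dev/null 2>&1 && echo "present, pull skipped"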
	I0601 12:00:00.473678   28829 start.go:352] acquiring machines lock for embed-certs-20220601115855-16804: {Name:mk196f5f4a80c33b64e542dea375820ba3ed670b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:00:00.473769   28829 start.go:356] acquired machines lock for "embed-certs-20220601115855-16804" in 61.526µs
	I0601 12:00:00.473799   28829 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:00:00.473808   28829 fix.go:55] fixHost starting: 
	I0601 12:00:00.474098   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:00:00.546983   28829 fix.go:103] recreateIfNeeded on embed-certs-20220601115855-16804: state=Stopped err=<nil>
	W0601 12:00:00.547020   28829 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:00:00.590598   28829 out.go:177] * Restarting existing docker container for "embed-certs-20220601115855-16804" ...
	I0601 12:00:00.611803   28829 cli_runner.go:164] Run: docker start embed-certs-20220601115855-16804
	I0601 12:00:00.981301   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:00:01.057530   28829 kic.go:416] container "embed-certs-20220601115855-16804" state is running.
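	fixHost found the profile's container in state=Stopped, so rather than recreating it minikube restarted the existing one; the two inspect calls bracketing "docker start" confirm the transition to running. The same sequence by hand:

	    docker container inspect embed-certs-20220601115855-16804 --format '{{.State.Status}}'   # stopped/exited before, "running" after
	    docker start embed-certs-20220601115855-16804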
	I0601 12:00:01.058483   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:01.138894   28829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/config.json ...
	I0601 12:00:01.139319   28829 machine.go:88] provisioning docker machine ...
	I0601 12:00:01.139343   28829 ubuntu.go:169] provisioning hostname "embed-certs-20220601115855-16804"
	I0601 12:00:01.139423   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.220339   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:01.220539   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:01.220567   28829 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601115855-16804 && echo "embed-certs-20220601115855-16804" | sudo tee /etc/hostname
	I0601 12:00:01.352125   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601115855-16804
	
	I0601 12:00:01.352207   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.427439   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:01.427585   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:01.427600   28829 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601115855-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601115855-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601115855-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:00:01.544609   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: 
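	The two SSH commands above are the hostname provisioning step: set the kernel hostname and persist it, then make /etc/hosts agree with it so the name resolves locally (the sed branch rewrites an existing 127.0.1.1 entry, the tee branch appends one). A minimal sketch of the same idempotent pattern, with NAME as a hypothetical placeholder:

	    NAME=example-node
	    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	    grep -q "$NAME" /etc/hosts || echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts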
	I0601 12:00:01.544628   28829 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:00:01.544653   28829 ubuntu.go:177] setting up certificates
	I0601 12:00:01.544660   28829 provision.go:83] configureAuth start
	I0601 12:00:01.544721   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:01.621530   28829 provision.go:138] copyHostCerts
	I0601 12:00:01.621625   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:00:01.621636   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:00:01.621742   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:00:01.621969   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:00:01.621980   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:00:01.622043   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:00:01.622216   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:00:01.622223   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:00:01.622288   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:00:01.622404   28829 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601115855-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601115855-16804]
	I0601 12:00:01.850945   28829 provision.go:172] copyRemoteCerts
	I0601 12:00:01.851024   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:00:01.851079   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.929859   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:02.016851   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 12:00:02.037368   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:00:02.055389   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0601 12:00:02.077593   28829 provision.go:86] duration metric: configureAuth took 532.923535ms
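	configureAuth regenerated the client TLS material locally (the found/rm/cp triplets above), minted a server certificate with the SANs listed in the "generating server cert" line, and pushed the server key/cert plus the CA to /etc/docker over scp. The result can be checked from inside the node; a sketch, assuming openssl is available there:

	    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'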
	I0601 12:00:02.077613   28829 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:00:02.077867   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:00:02.077925   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.152444   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.152592   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.152602   28829 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:00:02.272393   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:00:02.272406   28829 ubuntu.go:71] root file system type: overlay
	I0601 12:00:02.272550   28829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:00:02.272624   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.345039   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.345239   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.345322   28829 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:00:02.473536   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:00:02.473632   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.547006   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.547206   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.547219   28829 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:00:02.668285   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: 
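	Two idioms in the exchange above are worth noting. First, the empty "ExecStart=" line in the generated unit is systemd's standard reset: it clears the ExecStart list inherited from the base dockerd unit before the replacement command is appended, exactly as the inline comments explain; without it systemd would refuse the unit ("more than one ExecStart= setting"). Second, the diff || { ... } one-liner is an update-only-if-changed guard: diff exits 0 when docker.service.new matches the installed unit, so the mv/daemon-reload/enable/restart branch runs only on a real change; the empty output here suggests the unit was unchanged and Docker was not restarted. To inspect the effective unit afterwards (the first command also appears later in this log):

	    sudo systemctl cat docker.service          # the unit text systemd will actually read
	    sudo systemctl show -p ExecStart docker    # confirm a single effective ExecStart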
	I0601 12:00:02.668306   28829 machine.go:91] provisioned docker machine in 1.528998011s
	I0601 12:00:02.668317   28829 start.go:306] post-start starting for "embed-certs-20220601115855-16804" (driver="docker")
	I0601 12:00:02.668321   28829 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:00:02.668376   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:00:02.668419   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.744308   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:02.832162   28829 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:00:02.835671   28829 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:00:02.835684   28829 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:00:02.835691   28829 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:00:02.835696   28829 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:00:02.835704   28829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:00:02.835822   28829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:00:02.835969   28829 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:00:02.836134   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:00:02.843255   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:00:02.861502   28829 start.go:309] post-start completed in 193.177974ms
	I0601 12:00:02.861575   28829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:00:02.861682   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.936096   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.020138   28829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:00:03.024381   28829 fix.go:57] fixHost completed within 2.550601276s
	I0601 12:00:03.024393   28829 start.go:81] releasing machines lock for "embed-certs-20220601115855-16804", held for 2.550641205s
	I0601 12:00:03.024471   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:03.097794   28829 ssh_runner.go:195] Run: systemctl --version
	I0601 12:00:03.097795   28829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:00:03.097869   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:03.097902   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:03.176095   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.179173   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.393941   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:00:03.405857   28829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:00:03.415824   28829 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:00:03.415875   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:00:03.425026   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:00:03.437823   28829 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:00:03.518418   28829 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:00:03.586389   28829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:00:03.597266   28829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:00:03.669442   28829 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:00:03.679546   28829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:00:03.715983   28829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:00:03.793958   28829 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:00:03.794135   28829 cli_runner.go:164] Run: docker exec -t embed-certs-20220601115855-16804 dig +short host.docker.internal
	I0601 12:00:03.928920   28829 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:00:03.929017   28829 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:00:03.933477   28829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:00:03.943415   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:04.016419   28829 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:00:04.016501   28829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:00:04.048821   28829 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:00:04.048836   28829 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:00:04.048899   28829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:00:04.079435   28829 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:00:04.079457   28829 cache_images.go:84] Images are preloaded, skipping loading
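	Both "docker images" listings return the complete v1.23.6 control-plane image set (only the ordering differs), so the preload tarball is not extracted again. The check is reproducible by hand inside the node:

	    docker images --format '{{.Repository}}:{{.Tag}}' | sort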
	I0601 12:00:04.079567   28829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:00:04.154405   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:04.154416   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:04.154426   28829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 12:00:04.154437   28829 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601115855-16804 NodeName:embed-certs-20220601115855-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:00:04.154550   28829 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220601115855-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
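	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new below. A config like this can be exercised without touching the cluster; a sketch, assuming kubeadm v1.23.x behavior:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run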
	
	I0601 12:00:04.154614   28829 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220601115855-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 12:00:04.154674   28829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:00:04.162496   28829 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:00:04.162605   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:00:04.169803   28829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0601 12:00:04.182475   28829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:00:04.196040   28829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0601 12:00:04.210349   28829 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:00:04.214249   28829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:00:04.224887   28829 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804 for IP: 192.168.58.2
	I0601 12:00:04.225006   28829 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:00:04.225070   28829 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:00:04.225156   28829 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/client.key
	I0601 12:00:04.225217   28829 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.key.cee25041
	I0601 12:00:04.225268   28829 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.key
	I0601 12:00:04.225483   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:00:04.225526   28829 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:00:04.225542   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:00:04.225573   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:00:04.225606   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:00:04.225635   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:00:04.225702   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:00:04.226272   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:00:04.245065   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:00:04.264844   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:00:04.283813   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 12:00:04.302400   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:00:04.320094   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:00:04.337340   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:00:04.355164   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:00:04.372566   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:00:04.390758   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:00:04.407937   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:00:04.425147   28829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:00:04.438402   28829 ssh_runner.go:195] Run: openssl version
	I0601 12:00:04.444064   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:00:04.452131   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.456181   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.456224   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.461511   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 12:00:04.468902   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:00:04.476746   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.480878   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.480926   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.486478   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:00:04.493830   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:00:04.501614   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.505599   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.505640   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.511112   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
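	The openssl sequence above installs each trusted certificate into the hashed-symlink layout OpenSSL uses for CA lookup: "openssl x509 -hash" prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs makes the certificate discoverable (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certificates). The equivalent by hand:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"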
	I0601 12:00:04.518272   28829 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
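The StartCluster line is a dump of minikube's serialized cluster config. A heavily trimmed, hypothetical Go view of the fields that matter for this run (field names follow the dump; these are not minikube's actual type definitions):

    // Trimmed to the fields visible in the dump above; everything else elided.
    type KubernetesConfig struct {
    	KubernetesVersion string // v1.23.6
    	ClusterName       string // embed-certs-20220601115855-16804
    	ServiceCIDR       string // 10.96.0.0/12
    	ContainerRuntime  string // docker
    	NodePort          int    // 8443
    }

    type ClusterConfig struct {
    	Name             string
    	EmbedCerts       bool   // true: certs are embedded in the kubeconfig
    	Driver           string // docker (KIC)
    	Memory, CPUs     int    // 2200 MB, 2
    	KubernetesConfig KubernetesConfig
    	Addons           map[string]bool // dashboard, metrics-server, ...
    }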
	I0601 12:00:04.518372   28829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:00:04.546843   28829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:00:04.554437   28829 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:00:04.554453   28829 kubeadm.go:626] restartCluster start
	I0601 12:00:04.554494   28829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:00:04.561477   28829 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:04.561586   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
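cli_runner resolves the host port that Docker mapped to the node's 8443/tcp with a Go template. A sketch of the same lookup, assuming a local docker CLI:

    import (
    	"os/exec"
    	"strings"
    )

    // hostPortFor8443 returns the host port docker published for the container's
    // 8443/tcp, using the same template string as the log line above.
    func hostPortFor8443(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`, container).Output()
    	return strings.TrimSpace(string(out)), err
    }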
	I0601 12:00:04.636533   28829 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220601115855-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:00:04.636800   28829 kubeconfig.go:127] "embed-certs-20220601115855-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:00:04.637127   28829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:00:04.638462   28829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:00:04.646150   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:04.646199   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:04.654404   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:04.877249   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:04.877380   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:04.888485   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.077954   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.078102   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.090777   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.277957   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.278185   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.288604   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.476559   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.476656   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.488394   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.677991   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.678216   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.689348   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.876473   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.876581   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.887319   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.078192   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.078404   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.088967   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.275971   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.276084   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.286277   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.476564   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.476653   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.487710   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.677961   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.678149   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.688765   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.878002   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.878195   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.888550   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.075946   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.076132   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.087777   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.276117   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.276185   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.284689   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.477252   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.477434   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.488159   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.677742   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.677844   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.688190   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.688199   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.688241   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.696107   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
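The run above probes pgrep on a roughly 200 ms cadence and gives up after a few seconds, at which point kubeadm.go:601 declares "needs reconfigure". A hedged reconstruction of that wait loop (the runner callback and timeout are assumptions, not minikube's exact code):

    import (
    	"errors"
    	"time"
    )

    // waitForAPIServerPID retries the pgrep probe until it succeeds or the
    // deadline passes; each failed probe is the "Process exited with status 1"
    // repeated above.
    func waitForAPIServerPID(run func(cmd string) error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return errors.New("timed out waiting for the condition")
    }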
	I0601 12:00:07.696118   28829 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 12:00:07.696125   28829 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:00:07.696181   28829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:00:07.726742   28829 docker.go:442] Stopping containers: [54f727789abd 1a421477b475 d34c5263066b 4b5d8c649cd9 54ff8c39a3a3 d7c01b3e7bd3 aff02a265852 26c16b34697b 61e2850c4dc2 5c57a813ff5a f842c60a2bc5 e84f942430d3 8fa7e200ea41 d699653d0b64 0338f069b9af 8ea64f1a925b]
	I0601 12:00:07.726812   28829 ssh_runner.go:195] Run: docker stop 54f727789abd 1a421477b475 d34c5263066b 4b5d8c649cd9 54ff8c39a3a3 d7c01b3e7bd3 aff02a265852 26c16b34697b 61e2850c4dc2 5c57a813ff5a f842c60a2bc5 e84f942430d3 8fa7e200ea41 d699653d0b64 0338f069b9af 8ea64f1a925b
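docker.go first lists every container whose name matches the kube-system pod pattern, then stops the whole batch with a single docker stop. A sketch, assuming a local docker CLI rather than ssh_runner:

    import (
    	"os/exec"
    	"strings"
    )

    // stopKubeSystem finds containers named like k8s_..._(kube-system)_ and
    // stops them in one batch, as in docker.go:442 above.
    func stopKubeSystem() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil
    	}
    	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }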
	I0601 12:00:07.758600   28829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:00:07.769183   28829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:00:07.777276   28829 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 18:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  1 18:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun  1 18:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 18:59 /etc/kubernetes/scheduler.conf
	
	I0601 12:00:07.777325   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 12:00:07.785105   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 12:00:07.792774   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 12:00:07.800094   28829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.800141   28829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:00:07.806961   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 12:00:07.814145   28829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.814256   28829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
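Each kubeconfig under /etc/kubernetes is kept only if it still references https://control-plane.minikube.internal:8443; files that fail the grep (controller-manager.conf and scheduler.conf here) are removed so kubeadm regenerates them. A compact sketch of that check-and-prune step (the runner helper is an assumption):

    // pruneStaleConf deletes a kubeconfig that no longer references the expected
    // control-plane endpoint, mirroring the grep/rm pairs in the log.
    func pruneStaleConf(run func(cmd string) error, conf string) error {
    	grep := "sudo grep https://control-plane.minikube.internal:8443 " + conf
    	if err := run(grep); err != nil {
    		return run("sudo rm -f " + conf)
    	}
    	return nil
    }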
	I0601 12:00:07.821393   28829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:00:07.829055   28829 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:00:07.829066   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:07.875534   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:08.943797   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068258535s)
	I0601 12:00:08.943827   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.070381   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.117719   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
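Instead of a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence (the runner type is an assumption; the command strings come straight from the log):

    import "fmt"

    // replayInitPhases re-runs the kubeadm phases logged above, stopping at the
    // first failure.
    func replayInitPhases(run func(cmd string) error) error {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if err := run(cmd); err != nil {
    			return fmt.Errorf("phase %q: %w", p, err)
    		}
    	}
    	return nil
    }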
	I0601 12:00:09.164707   28829 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:00:09.164770   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:09.676929   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.174847   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.675184   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.725618   28829 api_server.go:71] duration metric: took 1.560936283s to wait for apiserver process to appear ...
	I0601 12:00:10.725639   28829 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:00:10.725650   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:13.229293   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0601 12:00:13.229314   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0601 12:00:13.731491   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:13.739444   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:00:13.739457   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:00:14.229657   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:14.235866   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:00:14.235887   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:00:14.729449   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:14.735550   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 200:
	ok
	I0601 12:00:14.742074   28829 api_server.go:140] control plane version: v1.23.6
	I0601 12:00:14.742087   28829 api_server.go:130] duration metric: took 4.016491291s to wait for apiserver health ...
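The healthz gate tolerates the 403 (anonymous user before RBAC bootstrap) and 500 (rbac and scheduling poststarthooks still running) phases above, retrying every ~500 ms until the endpoint returns 200 "ok". A hedged sketch; minikube's real probe wires in the cluster CA instead of skipping TLS verification:

    import (
    	"crypto/tls"
    	"errors"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // probeHealthz polls /healthz until it returns 200 "ok" or the deadline hits.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("apiserver never became healthy")
    }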
	I0601 12:00:14.742094   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:14.742105   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:14.742117   28829 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:00:14.749795   28829 system_pods.go:59] 8 kube-system pods found
	I0601 12:00:14.749812   28829 system_pods.go:61] "coredns-64897985d-hxbhf" [b1b3b467-12fe-4681-9a86-2855ba1e087a] Running
	I0601 12:00:14.749819   28829 system_pods.go:61] "etcd-embed-certs-20220601115855-16804" [9bdd83e2-edc8-4fd6-913e-c978b2a390a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 12:00:14.749823   28829 system_pods.go:61] "kube-apiserver-embed-certs-20220601115855-16804" [f01aa1c0-7c66-485f-8ae9-ea81ec72d61f] Running
	I0601 12:00:14.749830   28829 system_pods.go:61] "kube-controller-manager-embed-certs-20220601115855-16804" [4b44afb1-a477-4b52-af8c-9fbf9947dcc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 12:00:14.749836   28829 system_pods.go:61] "kube-proxy-hhbwv" [19408c1b-0db7-4ce4-bda8-b9ef78054eb5] Running
	I0601 12:00:14.749840   28829 system_pods.go:61] "kube-scheduler-embed-certs-20220601115855-16804" [1e8cf785-92e1-4068-add7-d217ee3fd625] Running
	I0601 12:00:14.749845   28829 system_pods.go:61] "metrics-server-b955d9d8-cv5b4" [8e155e5b-8d5c-4898-a95f-4d24d1c85714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:00:14.749849   28829 system_pods.go:61] "storage-provisioner" [a3a21a47-4019-4f29-ac55-23ca85609de6] Running
	I0601 12:00:14.749853   28829 system_pods.go:74] duration metric: took 7.73298ms to wait for pod list to return data ...
	I0601 12:00:14.749859   28829 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:00:14.753342   28829 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:00:14.753360   28829 node_conditions.go:123] node cpu capacity is 6
	I0601 12:00:14.753372   28829 node_conditions.go:105] duration metric: took 3.509003ms to run NodePressure ...
	I0601 12:00:14.753387   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:14.902276   28829 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 12:00:14.908459   28829 kubeadm.go:777] kubelet initialised
	I0601 12:00:14.908471   28829 kubeadm.go:778] duration metric: took 6.181ms waiting for restarted kubelet to initialise ...
	I0601 12:00:14.908479   28829 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:00:14.914477   28829 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-hxbhf" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:14.919226   28829 pod_ready.go:92] pod "coredns-64897985d-hxbhf" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:14.919234   28829 pod_ready.go:81] duration metric: took 4.746053ms waiting for pod "coredns-64897985d-hxbhf" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:14.919239   28829 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:16.930345   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:18.930602   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:20.931370   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:23.429560   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:25.431054   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:27.431111   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:29.432632   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:29.930254   28829 pod_ready.go:92] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.930266   28829 pod_ready.go:81] duration metric: took 15.011203247s waiting for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.930272   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.934493   28829 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.934501   28829 pod_ready.go:81] duration metric: took 4.223819ms waiting for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.934506   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.939831   28829 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.939839   28829 pod_ready.go:81] duration metric: took 5.322445ms waiting for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.939845   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hhbwv" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.944936   28829 pod_ready.go:92] pod "kube-proxy-hhbwv" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.944945   28829 pod_ready.go:81] duration metric: took 5.09599ms waiting for pod "kube-proxy-hhbwv" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.944951   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.950311   28829 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.950320   28829 pod_ready.go:81] duration metric: took 5.363535ms waiting for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
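pod_ready.go then waits on each system-critical pod individually, with a fresh 4m budget per pod and a poll on the Ready condition; the metrics-server wait below is the one that never converges. A minimal sketch of such a wait (getReady is an assumed lookup helper, not minikube's API):

    import (
    	"fmt"
    	"time"
    )

    // waitPodReady polls a pod's Ready condition, mirroring pod_ready.go's
    // per-pod 4m wait.
    func waitPodReady(getReady func(name string) (bool, error), name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if ready, err := getReady(name); err == nil && ready {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %q never reported Ready", name)
    }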
	I0601 12:00:29.950326   28829 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:32.337276   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:34.338997   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:36.838194   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:39.339010   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:41.837043   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:43.839697   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:46.337938   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:48.338698   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:50.837208   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:53.336924   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:55.337759   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:57.837371   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:59.838487   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:02.338943   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:04.839121   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:07.336527   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:09.835809   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:11.837079   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:13.838677   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:16.336928   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:18.837052   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:20.838148   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:23.335490   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:25.336728   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:27.839348   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:30.337601   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:32.838908   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:35.337845   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:37.836046   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:39.836118   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:41.836308   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:43.838508   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:46.338445   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:48.838271   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:50.838560   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:53.335328   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:53.209412   28319 kubeadm.go:397] StartCluster complete in 7m58.682761983s
	I0601 12:01:53.209495   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 12:01:53.239013   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.239025   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 12:01:53.239081   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 12:01:53.268562   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.268573   28319 logs.go:276] No container was found matching "etcd"
	I0601 12:01:53.268647   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 12:01:53.300274   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.300286   28319 logs.go:276] No container was found matching "coredns"
	I0601 12:01:53.300359   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 12:01:53.329677   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.329689   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 12:01:53.329746   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 12:01:53.361469   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.361481   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 12:01:53.361536   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 12:01:53.391374   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.391386   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 12:01:53.391442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 12:01:53.419646   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.419659   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 12:01:53.419718   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 12:01:53.450297   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.450310   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 12:01:53.450317   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 12:01:53.450324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 12:01:53.493726   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 12:01:53.493744   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 12:01:53.506201   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 12:01:53.506214   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 12:01:53.559752   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 12:01:53.559763   28319 logs.go:123] Gathering logs for Docker ...
	I0601 12:01:53.559771   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 12:01:53.572451   28319 logs.go:123] Gathering logs for container status ...
	I0601 12:01:53.572466   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 12:01:55.624682   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052227376s)
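When StartCluster fails (the interleaved 28319 process, running the v1.16.0 job, times out here after 7m58s), logs.go fans out over a fixed set of collectors before surfacing the error. The command strings below are lifted from the log; the map shape is illustrative only:

    // Failure-time log collectors, keyed by the section names printed above.
    var collectors = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    	"Docker":           "sudo journalctl -u docker -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }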
	W0601 12:01:55.624796   28319 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0601 12:01:55.624810   28319 out.go:239] * 
	W0601 12:01:55.624940   28319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr omitted: identical to the kubeadm init failure dumped above at 12:01:55.624796)
	W0601 12:01:55.624954   28319 out.go:239] * 
	W0601 12:01:55.625525   28319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 12:01:55.688737   28319 out.go:177] 
	W0601 12:01:55.731070   28319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr omitted: identical to the kubeadm init failure dumped above at 12:01:55.624796)
	W0601 12:01:55.731219   28319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 12:01:55.731329   28319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 12:01:55.794921   28319 out.go:177] 
	I0601 12:01:55.836076   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:58.336447   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:00.838455   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:03.335336   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:05.336325   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:07.838513   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:10.337754   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:12.838489   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:15.335382   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:17.837535   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:20.334412   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:22.334794   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:24.836980   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:26.837851   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:29.334703   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:31.836821   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:34.335821   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:36.355932   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:38.836152   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:41.338268   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:43.834144   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:45.838347   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:48.334485   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:50.335178   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:52.336039   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:54.835277   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:56.845623   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:59.335323   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:01.335991   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:03.835867   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:05.836222   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:08.336341   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:10.337348   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:12.837078   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:14.837161   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:17.337300   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:19.833964   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:21.834609   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:23.837358   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:26.335932   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:28.833759   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:30.836473   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:32.836486   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:35.337161   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:37.834111   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:39.834932   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:41.835885   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:44.334515   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:46.334562   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:48.836033   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:51.333781   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:53.336702   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:55.833470   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:57.836511   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:59.837021   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:02.335801   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:04.833600   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:06.837271   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:09.333347   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:11.334780   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:13.336669   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:15.834388   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:17.836752   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:20.336587   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:22.833053   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:24.835021   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:27.334160   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:29.834266   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:30.328306   28829 pod_ready.go:81] duration metric: took 4m0.380859693s waiting for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" ...
	E0601 12:04:30.328371   28829 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 12:04:30.328384   28829 pod_ready.go:38] duration metric: took 4m15.422969154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:04:30.328408   28829 kubeadm.go:630] restartCluster took 4m25.777145349s
	W0601 12:04:30.328486   28829 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 12:04:30.328501   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 12:05:08.804815   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.476762521s)
	I0601 12:05:08.804876   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:05:08.815268   28829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:05:08.823153   28829 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 12:05:08.823230   28829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:05:08.830907   28829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 12:05:08.830934   28829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 12:05:09.330682   28829 out.go:204]   - Generating certificates and keys ...
	I0601 12:05:09.942397   28829 out.go:204]   - Booting up control plane ...
	I0601 12:05:16.496487   28829 out.go:204]   - Configuring RBAC rules ...
	I0601 12:05:16.872838   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:05:16.872853   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:05:16.872875   28829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:05:16.872963   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=embed-certs-20220601115855-16804 minikube.k8s.io/updated_at=2022_06_01T12_05_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:16.872968   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:16.892760   28829 ops.go:34] apiserver oom_adj: -16
	I0601 12:05:17.095618   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:17.711843   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:18.212167   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:18.711996   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:19.211974   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:19.711974   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:20.211968   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:20.711877   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:21.211868   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:21.711939   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:22.211919   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:22.711862   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:23.211803   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:23.711916   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:24.211809   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:24.711811   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:25.211781   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:25.711871   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:26.211933   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:26.711893   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:27.211878   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:27.711875   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:28.211818   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:28.711786   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:29.211741   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:29.268731   28829 kubeadm.go:1045] duration metric: took 12.395967112s to wait for elevateKubeSystemPrivileges.
	I0601 12:05:29.268748   28829 kubeadm.go:397] StartCluster complete in 5m24.754388961s
	I0601 12:05:29.268775   28829 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:05:29.268868   28829 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:05:29.269672   28829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:05:29.783919   28829 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601115855-16804" rescaled to 1
	I0601 12:05:29.783953   28829 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:05:29.783975   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:05:29.783981   28829 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:05:29.804692   28829 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.804694   28829 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.804695   28829 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.784099   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:05:29.804706   28829 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601115855-16804"
	I0601 12:05:29.804706   28829 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.804711   28829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601115855-16804"
	I0601 12:05:29.804715   28829 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601115855-16804"
	W0601 12:05:29.804714   28829 addons.go:165] addon dashboard should already be in state true
	W0601 12:05:29.804724   28829 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:05:29.804723   28829 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601115855-16804"
	W0601 12:05:29.804734   28829 addons.go:165] addon metrics-server should already be in state true
	I0601 12:05:29.804610   28829 out.go:177] * Verifying Kubernetes components...
	I0601 12:05:29.804760   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:29.804762   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:29.804763   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:29.862725   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:05:29.805011   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.805135   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.865559   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.866658   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.881842   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 12:05:29.903600   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:29.996535   28829 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601115855-16804"
	I0601 12:05:30.052547   28829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:05:30.032695   28829 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0601 12:05:30.052583   28829 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:05:30.073765   28829 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:05:30.111524   28829 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:05:30.111603   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:30.128372   28829 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601115855-16804" to be "Ready" ...
	I0601 12:05:30.148423   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:05:30.148483   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:05:30.185592   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:05:30.149018   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:30.185649   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.185663   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.189784   28829 node_ready.go:49] node "embed-certs-20220601115855-16804" has status "Ready":"True"
	I0601 12:05:30.206670   28829 node_ready.go:38] duration metric: took 58.256695ms waiting for node "embed-certs-20220601115855-16804" to be "Ready" ...
	I0601 12:05:30.206553   28829 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:05:30.206695   28829 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:05:30.227763   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:05:30.227781   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:05:30.227875   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.242759   28829 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-d4qsr" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:30.287225   28829 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:05:30.287239   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:05:30.287310   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.323283   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.327102   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.342566   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.383248   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.503711   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:05:30.503751   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:05:30.516741   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:05:30.600152   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:05:30.600169   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:05:30.602079   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:05:30.602090   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:05:30.617625   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:05:30.692085   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:05:30.692104   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:05:30.706290   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:05:30.706317   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:05:30.798560   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:05:30.798574   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:05:30.809813   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:05:30.809843   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:05:30.887234   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:05:30.887249   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:05:30.891885   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:05:30.910055   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:05:30.910073   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:05:30.999281   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:05:30.999339   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:05:31.087293   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:05:31.087310   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:05:31.115353   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:05:31.115368   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:05:31.198051   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:05:31.605430   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.723571406s)
	I0601 12:05:31.605450   28829 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 12:05:31.988501   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.471752451s)
	I0601 12:05:31.988561   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.370919355s)
	I0601 12:05:32.095906   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204010693s)
	I0601 12:05:32.095940   28829 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601115855-16804"
	I0601 12:05:32.307621   28829 pod_ready.go:102] pod "coredns-64897985d-d4qsr" in "kube-system" namespace has status "Ready":"False"
	I0601 12:05:32.412245   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.214183058s)
	I0601 12:05:32.489169   28829 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 12:05:32.510111   28829 addons.go:417] enableAddons completed in 2.726142639s
	I0601 12:05:34.791205   28829 pod_ready.go:102] pod "coredns-64897985d-d4qsr" in "kube-system" namespace has status "Ready":"False"
	I0601 12:05:35.790741   28829 pod_ready.go:92] pod "coredns-64897985d-d4qsr" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.790756   28829 pod_ready.go:81] duration metric: took 5.548037364s waiting for pod "coredns-64897985d-d4qsr" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.790764   28829 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-lflsj" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.796990   28829 pod_ready.go:92] pod "coredns-64897985d-lflsj" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.797000   28829 pod_ready.go:81] duration metric: took 6.223912ms waiting for pod "coredns-64897985d-lflsj" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.797007   28829 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.801815   28829 pod_ready.go:92] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.801825   28829 pod_ready.go:81] duration metric: took 4.81318ms waiting for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.801839   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.806567   28829 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.806577   28829 pod_ready.go:81] duration metric: took 4.727671ms waiting for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.806584   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.812087   28829 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.812104   28829 pod_ready.go:81] duration metric: took 5.511915ms waiting for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.812121   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pt2q" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.188181   28829 pod_ready.go:92] pod "kube-proxy-8pt2q" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:36.188191   28829 pod_ready.go:81] duration metric: took 376.062763ms waiting for pod "kube-proxy-8pt2q" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.188198   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.588612   28829 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:36.588642   28829 pod_ready.go:81] duration metric: took 400.444499ms waiting for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.588648   28829 pod_ready.go:38] duration metric: took 6.36129347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:05:36.588666   28829 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:05:36.588714   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:05:36.600807   28829 api_server.go:71] duration metric: took 6.816911881s to wait for apiserver process to appear ...
	I0601 12:05:36.600821   28829 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:05:36.600835   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:05:36.607455   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 200:
	ok
	I0601 12:05:36.608556   28829 api_server.go:140] control plane version: v1.23.6
	I0601 12:05:36.608565   28829 api_server.go:130] duration metric: took 7.7397ms to wait for apiserver health ...
	I0601 12:05:36.608570   28829 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:05:36.792965   28829 system_pods.go:59] 9 kube-system pods found
	I0601 12:05:36.792980   28829 system_pods.go:61] "coredns-64897985d-d4qsr" [85fd00ad-b978-455f-b46f-3abba8272140] Running
	I0601 12:05:36.792983   28829 system_pods.go:61] "coredns-64897985d-lflsj" [3cc3702d-8d03-4400-951b-ad67ad94d3dc] Running
	I0601 12:05:36.792988   28829 system_pods.go:61] "etcd-embed-certs-20220601115855-16804" [28835ec6-e6b7-44fc-b6d8-0c9828e4bbc5] Running
	I0601 12:05:36.792999   28829 system_pods.go:61] "kube-apiserver-embed-certs-20220601115855-16804" [0a9f4bf1-fba4-403b-a490-d07f9eb64a93] Running
	I0601 12:05:36.793003   28829 system_pods.go:61] "kube-controller-manager-embed-certs-20220601115855-16804" [0b5a87bc-f75b-4497-9c7c-74317a55b16e] Running
	I0601 12:05:36.793008   28829 system_pods.go:61] "kube-proxy-8pt2q" [fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb] Running
	I0601 12:05:36.793013   28829 system_pods.go:61] "kube-scheduler-embed-certs-20220601115855-16804" [b3e9d427-fac9-4a1f-b158-d7b04cd8f4e4] Running
	I0601 12:05:36.793019   28829 system_pods.go:61] "metrics-server-b955d9d8-fnr2z" [aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:05:36.793024   28829 system_pods.go:61] "storage-provisioner" [f9e71c1d-677f-4843-9310-a42068423370] Running
	I0601 12:05:36.793028   28829 system_pods.go:74] duration metric: took 184.456154ms to wait for pod list to return data ...
	I0601 12:05:36.793034   28829 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:05:36.988778   28829 default_sa.go:45] found service account: "default"
	I0601 12:05:36.988790   28829 default_sa.go:55] duration metric: took 195.754132ms for default service account to be created ...
	I0601 12:05:36.988796   28829 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 12:05:37.192496   28829 system_pods.go:86] 9 kube-system pods found
	I0601 12:05:37.192511   28829 system_pods.go:89] "coredns-64897985d-d4qsr" [85fd00ad-b978-455f-b46f-3abba8272140] Running
	I0601 12:05:37.192515   28829 system_pods.go:89] "coredns-64897985d-lflsj" [3cc3702d-8d03-4400-951b-ad67ad94d3dc] Running
	I0601 12:05:37.192519   28829 system_pods.go:89] "etcd-embed-certs-20220601115855-16804" [28835ec6-e6b7-44fc-b6d8-0c9828e4bbc5] Running
	I0601 12:05:37.192530   28829 system_pods.go:89] "kube-apiserver-embed-certs-20220601115855-16804" [0a9f4bf1-fba4-403b-a490-d07f9eb64a93] Running
	I0601 12:05:37.192535   28829 system_pods.go:89] "kube-controller-manager-embed-certs-20220601115855-16804" [0b5a87bc-f75b-4497-9c7c-74317a55b16e] Running
	I0601 12:05:37.192538   28829 system_pods.go:89] "kube-proxy-8pt2q" [fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb] Running
	I0601 12:05:37.192543   28829 system_pods.go:89] "kube-scheduler-embed-certs-20220601115855-16804" [b3e9d427-fac9-4a1f-b158-d7b04cd8f4e4] Running
	I0601 12:05:37.192550   28829 system_pods.go:89] "metrics-server-b955d9d8-fnr2z" [aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:05:37.192554   28829 system_pods.go:89] "storage-provisioner" [f9e71c1d-677f-4843-9310-a42068423370] Running
	I0601 12:05:37.192559   28829 system_pods.go:126] duration metric: took 203.762371ms to wait for k8s-apps to be running ...
	I0601 12:05:37.192567   28829 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 12:05:37.192623   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:05:37.208966   28829 system_svc.go:56] duration metric: took 16.397228ms WaitForService to wait for kubelet.
	I0601 12:05:37.208983   28829 kubeadm.go:572] duration metric: took 7.425101117s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 12:05:37.209003   28829 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:05:37.397658   28829 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:05:37.397674   28829 node_conditions.go:123] node cpu capacity is 6
	I0601 12:05:37.397686   28829 node_conditions.go:105] duration metric: took 188.678974ms to run NodePressure ...
	I0601 12:05:37.397695   28829 start.go:213] waiting for startup goroutines ...
	I0601 12:05:37.436249   28829 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:05:37.458438   28829 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220601115855-16804" cluster and "default" namespace by default
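The start log closes by noting a one-minor-version skew between kubectl 1.24.0 and the 1.23.6 cluster, which is within the supported range. A quick manual check of the same skew (flag available on kubectl clients of this era):

    # Compare client and server versions; one minor version of skew is tolerated.
    kubectl version --short
    # Client Version: v1.24.0
    # Server Version: v1.23.6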
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:00:01 UTC, end at Wed 2022-06-01 19:06:29 UTC. --
	Jun 01 19:04:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:04:46.768042372Z" level=info msg="ignoring event" container=71ccdc78a1c6e69359691496410092a06a7c3fd4c37ffc2a5f6c4a413cbe91ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:04:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:04:46.880198446Z" level=info msg="ignoring event" container=a7d5d1d6cd18db063d1c2d7fc7991375e8dc8154935b0e0e76ac96ab9ba88c04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:04:57 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:04:57.028946054Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=0e09071f230461fc8c60629795e64043c9d84018cc9a80be9294d4552e0c52c3
	Jun 01 19:04:57 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:04:57.058873993Z" level=info msg="ignoring event" container=0e09071f230461fc8c60629795e64043c9d84018cc9a80be9294d4552e0c52c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:04:57 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:04:57.174821542Z" level=info msg="ignoring event" container=0aad5ca0a394cf6461c23f14fe591e66b89a7e79a4b465ab0eeb1e5f3efa0898 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.243462171Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c63a9420de02a1022c759b4caf98b142bbfb581f986dde7e4ac807c9aaaa4403
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.298602462Z" level=info msg="ignoring event" container=c63a9420de02a1022c759b4caf98b142bbfb581f986dde7e4ac807c9aaaa4403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.424678894Z" level=info msg="ignoring event" container=1b220fad43e1f5d85ff364980ed68edc766d714aeebc4982e79078ca74493ee8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.527325630Z" level=info msg="ignoring event" container=e3f4c96e3f9800c226e9f9aa6c062d24993f65c6c3f4272d8aa2276514c463fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.628775935Z" level=info msg="ignoring event" container=b739864d288ecbde8b0b1e93246a6d20856b0ac9233472ec6bf4de3ef3b43e33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.734389527Z" level=info msg="ignoring event" container=0110a1f81dcf8a5b18e4e20a574cc456a211c4ea74695c5236b0b3d3b4e3913a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.846017552Z" level=info msg="ignoring event" container=4af3a998b43b14c55b04591304de664d354309a29431e35de379467fd5eab9b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:32 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:32.692857665Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:32 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:32.692938410Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:32 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:32.695357867Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:33 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:33.327247619Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 19:05:38 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:38.088611457Z" level=info msg="ignoring event" container=56c8192225b0e9ced84e900e9c502343adf80b27c63d949c62dc5a73f1abf747 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:38 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:38.343116149Z" level=info msg="ignoring event" container=5932bcb6e9cf366b8149d7374f2be62bfcd0c9eb75134b18788050006d423ecf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:38 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:38.866356767Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:05:39 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:39.092699177Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:05:42 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:42.443264471Z" level=info msg="ignoring event" container=e850ccaa1099d442cbfe06579668f7fc245f3a13b858c88427d8a04116d6a64e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:43 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:43.262087068Z" level=info msg="ignoring event" container=597493054b80ac3015a530b4cdc9e41f26c7d37ad16eec9806b85bd595acf49d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:46.194170582Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:46.194210761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:46.195628514Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
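The fake.domain pull failures above match the `- Using image fake.domain/k8s.gcr.io/echoserver:1.4` step in the start log: the metrics-server addon is deliberately pointed at an unresolvable registry in this test. A sketch for tying the errors back to the pod, assuming the addon's usual k8s-app label:

    # Surface the image-pull events for the metrics-server pod.
    kubectl -n kube-system describe pod -l k8s-app=metrics-server | grep -i -B1 -A2 fail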
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	597493054b80a       a90209bb39e3d                                                                                    46 seconds ago       Exited              dashboard-metrics-scraper   1                   660c0600baea9
	40a3e61ea1d7c       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   51 seconds ago       Running             kubernetes-dashboard        0                   850a668ad00aa
	415c0adaff0b7       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   b7310893337b6
	5c7406ea20fc3       a4ca41631cc7a                                                                                    58 seconds ago       Running             coredns                     0                   f9cae47eb2f6d
	218d974cd4bd7       4c03754524064                                                                                    59 seconds ago       Running             kube-proxy                  0                   8a094407f9052
	8eccce42310b9       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   37c3ce776f22e
	001882f735bb7       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   9ff8f749dc98c
	6d8d06f0c3f76       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   f34e81bac3033
	0c9e2554bdfe8       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   73b9ea2bded5e
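The status table above reflects the runtime inside the minikube node. An equivalent view, assuming the docker runtime and a placeholder profile name:

    # List all containers, including exited ones, as in the table above.
    minikube -p <profile> ssh -- docker ps -a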
	
	* 
	* ==> coredns [5c7406ea20fc] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
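The `[INFO] Reloading` above corresponds to the CoreDNS configmap rewrite completed at 12:05:31 in the start log. Stripped of the in-node binary paths, that rewrite reduces to the following pipeline (kubeconfig and Corefile indentation assumed to match the stock addon):

    # Insert a hosts block ahead of the forward directive so that
    # host.minikube.internal resolves to the host gateway (192.168.65.2 here).
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -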
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601115855-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601115855-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=embed-certs-20220601115855-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T12_05_16_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 19:05:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601115855-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 19:06:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:06:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220601115855-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                9dffb622-e66d-49af-bc81-c172407d2bbc
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-lflsj                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-embed-certs-20220601115855-16804                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-embed-certs-20220601115855-16804             250m (4%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-embed-certs-20220601115855-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-8pt2q                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-embed-certs-20220601115855-16804             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 metrics-server-b955d9d8-fnr2z                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-8d2ch                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-n4ksx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 58s   kube-proxy  
	  Normal  Starting                 73s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                62s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeReady
	  Normal  Starting                 3s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [0c9e2554bdfe] <==
	* {"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.041Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220601115855-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:05:12.041Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:05:12.041Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T19:05:12.042Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:05:12.043Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-01T19:05:12.046Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.046Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.046Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.054Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:05:12.054Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:05:30.133Z","caller":"traceutil/trace.go:171","msg":"trace[362881679] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"105.350579ms","start":"2022-06-01T19:05:30.028Z","end":"2022-06-01T19:05:30.133Z","steps":["trace[362881679] 'read index received'  (duration: 105.330174ms)","trace[362881679] 'applied index is now lower than readState.Index'  (duration: 19.651µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:05:30.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"179.66537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/kube-system/kube-dns-gdqt6\" ","response":"range_response_count:1 size:912"}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[2006057421] range","detail":"{range_begin:/registry/endpointslices/kube-system/kube-dns-gdqt6; range_end:; response_count:1; response_revision:413; }","duration":"179.722305ms","start":"2022-06-01T19:05:30.028Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[2006057421] 'agreement among raft nodes before linearized reading'  (duration: 105.485103ms)","trace[2006057421] 'range keys from in-memory index tree'  (duration: 74.1599ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[487800730] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"179.715675ms","start":"2022-06-01T19:05:30.028Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[487800730] 'process raft request'  (duration: 105.305107ms)","trace[487800730] 'compare'  (duration: 73.978449ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[1425637086] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"178.192897ms","start":"2022-06-01T19:05:30.030Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[1425637086] 'process raft request'  (duration: 177.993948ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:05:30.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.600148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-d4qsr\" ","response":"range_response_count:1 size:3462"}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[834637519] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-d4qsr; range_end:; response_count:1; response_revision:415; }","duration":"102.617859ms","start":"2022-06-01T19:05:30.105Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[834637519] 'agreement among raft nodes before linearized reading'  (duration: 102.546921ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:05:30.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"177.630926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-64897985d\" ","response":"range_response_count:1 size:3511"}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[2014257552] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-64897985d; range_end:; response_count:1; response_revision:415; }","duration":"177.657381ms","start":"2022-06-01T19:05:30.030Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[2014257552] 'agreement among raft nodes before linearized reading'  (duration: 177.63083ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:06:29 up  1:09,  0 users,  load average: 0.32, 0.40, 0.74
	Linux embed-certs-20220601115855-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [001882f735bb] <==
	* I0601 19:05:14.948979       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 19:05:14.951256       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 19:05:14.951285       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 19:05:15.224929       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 19:05:15.256114       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 19:05:15.327099       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 19:05:15.331083       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 19:05:15.331997       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 19:05:15.334771       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 19:05:16.085091       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:05:16.718070       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:05:16.724833       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 19:05:16.737965       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:05:16.923379       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:05:29.193087       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 19:05:29.842617       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:05:30.893131       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:05:32.085311       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.54.92]
	E0601 19:05:32.118001       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0601 19:05:32.386482       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.95.217]
	I0601 19:05:32.399885       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.88.214]
	W0601 19:05:32.716937       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:05:32.717080       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:05:32.717104       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [6d8d06f0c3f7] <==
	* I0601 19:05:30.304238       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-d4qsr"
	I0601 19:05:31.709586       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 19:05:31.721993       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 19:05:31.794640       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 19:05:31.802307       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-fnr2z"
	I0601 19:05:32.013092       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 19:05:32.022400       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.034590       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.035398       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 19:05:32.089045       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:05:32.089094       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.095238       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:05:32.099710       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.103695       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.103866       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:05:32.117161       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:05:32.117274       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.117298       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:05:32.117309       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.124064       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.124368       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:05:32.189822       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-n4ksx"
	I0601 19:05:32.194432       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-8d2ch"
	E0601 19:06:26.172673       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 19:06:26.179913       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [218d974cd4bd] <==
	* I0601 19:05:30.705550       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:05:30.705611       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:05:30.705631       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:05:30.889606       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:05:30.889654       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:05:30.889662       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:05:30.889694       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:05:30.890039       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:05:30.890692       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:05:30.890709       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:05:30.890747       1 config.go:317] "Starting service config controller"
	I0601 19:05:30.890750       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:05:30.990890       1 shared_informer.go:247] Caches are synced for service config 
	I0601 19:05:30.990922       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [8eccce42310b] <==
	* W0601 19:05:14.026546       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 19:05:14.026680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 19:05:14.027375       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 19:05:14.027389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 19:05:14.027458       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 19:05:14.027467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:05:14.027953       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 19:05:14.028008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 19:05:14.028442       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:05:14.028592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:05:14.028620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:05:14.028633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:05:14.028810       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 19:05:14.028904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 19:05:14.028755       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 19:05:14.029083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 19:05:14.861883       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 19:05:14.861937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 19:05:14.913852       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:05:14.913920       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:05:14.992863       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:05:14.992899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:05:15.092244       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:05:15.092287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0601 19:05:17.117866       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:00:01 UTC, end at Wed 2022-06-01 19:06:30 UTC. --
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771039    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9e71c1d-677f-4843-9310-a42068423370-tmp\") pod \"storage-provisioner\" (UID: \"f9e71c1d-677f-4843-9310-a42068423370\") " pod="kube-system/storage-provisioner"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771123    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5j7j\" (UniqueName: \"kubernetes.io/projected/f9e71c1d-677f-4843-9310-a42068423370-kube-api-access-c5j7j\") pod \"storage-provisioner\" (UID: \"f9e71c1d-677f-4843-9310-a42068423370\") " pod="kube-system/storage-provisioner"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771165    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvr9\" (UniqueName: \"kubernetes.io/projected/aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f-kube-api-access-8nvr9\") pod \"metrics-server-b955d9d8-fnr2z\" (UID: \"aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f\") " pod="kube-system/metrics-server-b955d9d8-fnr2z"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771348    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb-lib-modules\") pod \"kube-proxy-8pt2q\" (UID: \"fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb\") " pod="kube-system/kube-proxy-8pt2q"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771437    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgxd8\" (UniqueName: \"kubernetes.io/projected/26ee83bc-3db8-4f98-8930-1dbcf4691729-kube-api-access-dgxd8\") pod \"dashboard-metrics-scraper-56974995fc-8d2ch\" (UID: \"26ee83bc-3db8-4f98-8930-1dbcf4691729\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-8d2ch"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771476    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb96d\" (UniqueName: \"kubernetes.io/projected/fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb-kube-api-access-gb96d\") pod \"kube-proxy-8pt2q\" (UID: \"fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb\") " pod="kube-system/kube-proxy-8pt2q"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771509    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/26ee83bc-3db8-4f98-8930-1dbcf4691729-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-8d2ch\" (UID: \"26ee83bc-3db8-4f98-8930-1dbcf4691729\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-8d2ch"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771539    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb-kube-proxy\") pod \"kube-proxy-8pt2q\" (UID: \"fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb\") " pod="kube-system/kube-proxy-8pt2q"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771556    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb-xtables-lock\") pod \"kube-proxy-8pt2q\" (UID: \"fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb\") " pod="kube-system/kube-proxy-8pt2q"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771595    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cc3702d-8d03-4400-951b-ad67ad94d3dc-config-volume\") pod \"coredns-64897985d-lflsj\" (UID: \"3cc3702d-8d03-4400-951b-ad67ad94d3dc\") " pod="kube-system/coredns-64897985d-lflsj"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771642    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k844p\" (UniqueName: \"kubernetes.io/projected/3cc3702d-8d03-4400-951b-ad67ad94d3dc-kube-api-access-k844p\") pod \"coredns-64897985d-lflsj\" (UID: \"3cc3702d-8d03-4400-951b-ad67ad94d3dc\") " pod="kube-system/coredns-64897985d-lflsj"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771703    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6b9cf5f5-152e-49cb-9646-876836323cd4-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-n4ksx\" (UID: \"6b9cf5f5-152e-49cb-9646-876836323cd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-n4ksx"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771839    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstfn\" (UniqueName: \"kubernetes.io/projected/6b9cf5f5-152e-49cb-9646-876836323cd4-kube-api-access-wstfn\") pod \"kubernetes-dashboard-8469778f77-n4ksx\" (UID: \"6b9cf5f5-152e-49cb-9646-876836323cd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-n4ksx"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771897    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f-tmp-dir\") pod \"metrics-server-b955d9d8-fnr2z\" (UID: \"aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f\") " pod="kube-system/metrics-server-b955d9d8-fnr2z"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771923    7144 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 19:06:28 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:28.946559    7144 request.go:665] Waited for 1.138589098s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 01 19:06:28 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:28.952182    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220601115855-16804\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220601115855-16804"
	Jun 01 19:06:29 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:29.179916    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220601115855-16804\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220601115855-16804"
	Jun 01 19:06:29 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:29.351888    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220601115855-16804\" already exists" pod="kube-system/etcd-embed-certs-20220601115855-16804"
	Jun 01 19:06:29 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:29.619302    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220601115855-16804\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220601115855-16804"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:30.151536    7144 scope.go:110] "RemoveContainer" containerID="597493054b80ac3015a530b4cdc9e41f26c7d37ad16eec9806b85bd595acf49d"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546224    7144 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546289    7144 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546411    7144 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8nvr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-fnr2z_kube-system(aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546544    7144 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-fnr2z" podUID=aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f
	
	* 
	* ==> kubernetes-dashboard [40a3e61ea1d7] <==
	* 2022/06/01 19:05:38 Using namespace: kubernetes-dashboard
	2022/06/01 19:05:38 Using in-cluster config to connect to apiserver
	2022/06/01 19:05:38 Using secret token for csrf signing
	2022/06/01 19:05:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 19:05:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 19:05:38 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 19:05:38 Generating JWE encryption key
	2022/06/01 19:05:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 19:05:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 19:05:38 Initializing JWE encryption key from synchronized object
	2022/06/01 19:05:38 Creating in-cluster Sidecar client
	2022/06/01 19:05:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:05:38 Serving insecurely on HTTP port: 9090
	2022/06/01 19:06:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:05:38 Starting overwatch
	
	* 
	* ==> storage-provisioner [415c0adaff0b] <==
	* I0601 19:05:32.666541       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:05:32.676286       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:05:32.676372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 19:05:32.686506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 19:05:32.686726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601115855-16804_6ba63202-3d2b-48a2-ab36-0a30f813ae17!
	I0601 19:05:32.686928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53ecb739-2455-4707-9163-e52e96cbeefc", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220601115855-16804_6ba63202-3d2b-48a2-ab36-0a30f813ae17 became leader
	I0601 19:05:32.787748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601115855-16804_6ba63202-3d2b-48a2-ab36-0a30f813ae17!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-fnr2z
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 describe pod metrics-server-b955d9d8-fnr2z
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601115855-16804 describe pod metrics-server-b955d9d8-fnr2z: exit status 1 (320.549431ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-fnr2z" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601115855-16804 describe pod metrics-server-b955d9d8-fnr2z: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220601115855-16804
helpers_test.go:235: (dbg) docker inspect embed-certs-20220601115855-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784",
	        "Created": "2022-06-01T18:59:02.28302225Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:00:00.971874149Z",
	            "FinishedAt": "2022-06-01T18:59:59.00759091Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/hostname",
	        "HostsPath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/hosts",
	        "LogPath": "/var/lib/docker/containers/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784/daff3bf0eba492c90056ce45176d631d185b87d88a61717d4e753c328f7d8784-json.log",
	        "Name": "/embed-certs-20220601115855-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220601115855-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220601115855-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07129088b088e47c009d6b43ee52c51985bc4af006235bc2ac0c38d05bac4b16/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220601115855-16804",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220601115855-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220601115855-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220601115855-16804",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220601115855-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5be6800116ac1a8a8437205abd9ac248a5c246bb27fddf3f127842a92f323157",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60747"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60748"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60749"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60746"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5be6800116ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220601115855-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "daff3bf0eba4",
	                        "embed-certs-20220601115855-16804"
	                    ],
	                    "NetworkID": "14104cac19a3970344e7e464fdc2a9525956f5dfe25aebc2916d1b0f0bef30de",
	                    "EndpointID": "3b1416234cb89aaef25eda8d72cf7dbc0b022d15bfd5613484628b57f77b3ac3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
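The `docker container inspect` JSON above is the source the test helpers query for the container's published ports; the log below shows the exact Go template used (`(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort`). For reference, a minimal Go sketch of that query, assuming only that `docker` is on PATH (helper name is hypothetical, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks the Docker CLI for the host port published for the
    // container's 22/tcp endpoint, using the same template seen in this log.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("docker inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("embed-certs-20220601115855-16804")
        if err != nil {
            panic(err)
        }
        fmt.Println(port) // e.g. 60747, matching the Ports section above
    }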
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220601115855-16804 logs -n 25

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220601115855-16804 logs -n 25: (2.73367447s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | enable-default-cni-20220601113004-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:50 PDT |
	|         | enable-default-cni-20220601113004-16804           |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:50 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:51 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:51 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:52 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:53 PDT | 01 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220601114806-16804              |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| start   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:52 PDT | 01 Jun 22 11:57 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| logs    | no-preload-20220601115057-16804                   | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804         | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                         |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804    | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:01 PDT | 01 Jun 22 12:02 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |                |                     |                     |
	|         | --driver=docker                                   |                                         |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                         |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                         |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804        | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                         |         |                |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:59:59
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
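The header above documents the klog-style line format used for the rest of this section. A small sketch of a matcher for that format, useful when post-processing these logs (the regexp is an assumption derived from the format string above, not minikube code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

    func main() {
        m := klogLine.FindStringSubmatch(
            "I0601 11:59:59.653204   28829 out.go:296] Setting OutFile to fd 1 ...")
        // severity, date, source file, message
        fmt.Println(m[1], m[2], m[5], m[7])
    }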
	I0601 11:59:59.653204   28829 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:59:59.653367   28829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:59:59.653373   28829 out.go:309] Setting ErrFile to fd 2...
	I0601 11:59:59.653377   28829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:59:59.653471   28829 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:59:59.653745   28829 out.go:303] Setting JSON to false
	I0601 11:59:59.668907   28829 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8969,"bootTime":1654101030,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:59:59.669021   28829 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:59:59.692330   28829 out.go:177] * [embed-certs-20220601115855-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:59:59.734931   28829 notify.go:193] Checking for updates...
	I0601 11:59:59.755632   28829 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:59:59.776895   28829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:59:59.797902   28829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:59:59.818891   28829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:59:59.840237   28829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:59:58.294591   28319 out.go:204]   - Booting up control plane ...
	I0601 11:59:59.862690   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:59:59.863349   28829 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:59:59.936186   28829 docker.go:137] docker version: linux-20.10.14
	I0601 11:59:59.936326   28829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:00:00.071723   28829 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:00:00.020706131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:00:00.115439   28829 out.go:177] * Using the docker driver based on existing profile
	I0601 12:00:00.136972   28829 start.go:284] selected driver: docker
	I0601 12:00:00.137021   28829 start.go:806] validating driver "docker" against &{Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedu
ledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:00.137102   28829 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:00:00.139236   28829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:00:00.273893   28829 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:00:00.221448867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:00:00.274092   28829 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 12:00:00.274108   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:00.274119   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:00.274130   28829 start_flags.go:306] config:
	{Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:00.317950   28829 out.go:177] * Starting control plane node embed-certs-20220601115855-16804 in cluster embed-certs-20220601115855-16804
	I0601 12:00:00.339702   28829 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:00:00.361619   28829 out.go:177] * Pulling base image ...
	I0601 12:00:00.403754   28829 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:00:00.403769   28829 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:00:00.403845   28829 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:00:00.403870   28829 cache.go:57] Caching tarball of preloaded images
	I0601 12:00:00.404060   28829 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:00:00.404081   28829 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 12:00:00.405150   28829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/config.json ...
	I0601 12:00:00.473577   28829 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:00:00.473612   28829 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:00:00.473622   28829 cache.go:206] Successfully downloaded all kic artifacts
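The preload lines above show the existence check that the failed TestDownloadOnly preload-exists cases exercise: the tarball path is derived from the Kubernetes version and container runtime, then simply stat'ed. A hypothetical sketch of that check (the fixed "v18"/"overlay2-amd64" path components are taken from the paths in this log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the cache path seen in the log and reports whether
    // the tarball is already present, so the download can be skipped.
    func preloadPath(minikubeHome, k8sVersion, runtime string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
            k8sVersion, runtime)
        p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        _, err := os.Stat(p)
        return p, err == nil
    }

    func main() {
        p, ok := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.6", "docker")
        fmt.Println(p, ok)
    }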
	I0601 12:00:00.473678   28829 start.go:352] acquiring machines lock for embed-certs-20220601115855-16804: {Name:mk196f5f4a80c33b64e542dea375820ba3ed670b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:00:00.473769   28829 start.go:356] acquired machines lock for "embed-certs-20220601115855-16804" in 61.526µs
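The lock spec above ({... Delay:500ms Timeout:10m0s ...}) describes an acquire-with-retry profile lock. A minimal sketch of that polling pattern, using a lock directory as the mutex (hypothetical; minikube's actual lock implementation is more involved):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock directory every delay until timeout,
    // mirroring the Delay:500ms Timeout:10m0s spec in the log.
    func acquire(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := os.Mkdir(path, 0o700); err == nil {
                return nil // mkdir is atomic: we own the lock
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("lock held")
        os.Remove("/tmp/machines.lock") // release
    }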
	I0601 12:00:00.473799   28829 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:00:00.473808   28829 fix.go:55] fixHost starting: 
	I0601 12:00:00.474098   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:00:00.546983   28829 fix.go:103] recreateIfNeeded on embed-certs-20220601115855-16804: state=Stopped err=<nil>
	W0601 12:00:00.547020   28829 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:00:00.590598   28829 out.go:177] * Restarting existing docker container for "embed-certs-20220601115855-16804" ...
	I0601 12:00:00.611803   28829 cli_runner.go:164] Run: docker start embed-certs-20220601115855-16804
	I0601 12:00:00.981301   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:00:01.057530   28829 kic.go:416] container "embed-certs-20220601115855-16804" state is running.
	I0601 12:00:01.058483   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:01.138894   28829 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/config.json ...
	I0601 12:00:01.139319   28829 machine.go:88] provisioning docker machine ...
	I0601 12:00:01.139343   28829 ubuntu.go:169] provisioning hostname "embed-certs-20220601115855-16804"
	I0601 12:00:01.139423   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.220339   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:01.220539   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:01.220567   28829 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220601115855-16804 && echo "embed-certs-20220601115855-16804" | sudo tee /etc/hostname
	I0601 12:00:01.352125   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220601115855-16804
	
	I0601 12:00:01.352207   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.427439   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:01.427585   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:01.427600   28829 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220601115855-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220601115855-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220601115855-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:00:01.544609   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:00:01.544628   28829 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:00:01.544653   28829 ubuntu.go:177] setting up certificates
	I0601 12:00:01.544660   28829 provision.go:83] configureAuth start
	I0601 12:00:01.544721   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:01.621530   28829 provision.go:138] copyHostCerts
	I0601 12:00:01.621625   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:00:01.621636   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:00:01.621742   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:00:01.621969   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:00:01.621980   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:00:01.622043   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:00:01.622216   28829 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:00:01.622223   28829 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:00:01.622288   28829 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:00:01.622404   28829 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220601115855-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220601115855-16804]
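configureAuth above generates a server certificate whose SANs cover the container IP, loopback, and host names. A sketch of issuing such a certificate with Go's standard crypto/x509 (self-signed here for brevity; minikube signs with its CA, and the Subject is made up):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"minikube-sketch"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // the SANs logged above: container IP, loopback, and host names
            IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "embed-certs-20220601115855-16804"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }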
	I0601 12:00:01.850945   28829 provision.go:172] copyRemoteCerts
	I0601 12:00:01.851024   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:00:01.851079   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:01.929859   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:02.016851   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0601 12:00:02.037368   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:00:02.055389   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0601 12:00:02.077593   28829 provision.go:86] duration metric: configureAuth took 532.923535ms
	I0601 12:00:02.077613   28829 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:00:02.077867   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:00:02.077925   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.152444   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.152592   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.152602   28829 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:00:02.272393   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:00:02.272406   28829 ubuntu.go:71] root file system type: overlay
	I0601 12:00:02.272550   28829 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:00:02.272624   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.345039   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.345239   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.345322   28829 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:00:02.473536   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:00:02.473632   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.547006   28829 main.go:134] libmachine: Using SSH client type: native
	I0601 12:00:02.547206   28829 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 60747 <nil> <nil>}
	I0601 12:00:02.547219   28829 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:00:02.668285   28829 main.go:134] libmachine: SSH cmd err, output: <nil>: 
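The command above only swaps in docker.service.new and restarts the daemon when `diff` reports a change; an unchanged unit costs one diff and no restart. A hypothetical Go rendering of that compare-then-replace step (not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // replaceIfChanged installs newContent at path only when it differs from
    // what is already there, and reports whether a service restart is needed.
    func replaceIfChanged(path string, newContent []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // identical unit: skip daemon-reload and restart
        }
        if err := os.WriteFile(path, newContent, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            panic(err)
        }
        fmt.Println("restart needed:", changed)
    }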
	I0601 12:00:02.668306   28829 machine.go:91] provisioned docker machine in 1.528998011s
	I0601 12:00:02.668317   28829 start.go:306] post-start starting for "embed-certs-20220601115855-16804" (driver="docker")
	I0601 12:00:02.668321   28829 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:00:02.668376   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:00:02.668419   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.744308   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:02.832162   28829 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:00:02.835671   28829 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:00:02.835684   28829 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:00:02.835691   28829 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:00:02.835696   28829 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:00:02.835704   28829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:00:02.835822   28829 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:00:02.835969   28829 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:00:02.836134   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:00:02.843255   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:00:02.861502   28829 start.go:309] post-start completed in 193.177974ms
	I0601 12:00:02.861575   28829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:00:02.861682   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:02.936096   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.020138   28829 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:00:03.024381   28829 fix.go:57] fixHost completed within 2.550601276s
	I0601 12:00:03.024393   28829 start.go:81] releasing machines lock for "embed-certs-20220601115855-16804", held for 2.550641205s
	I0601 12:00:03.024471   28829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220601115855-16804
	I0601 12:00:03.097794   28829 ssh_runner.go:195] Run: systemctl --version
	I0601 12:00:03.097795   28829 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:00:03.097869   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:03.097902   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:03.176095   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.179173   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:00:03.393941   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:00:03.405857   28829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:00:03.415824   28829 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:00:03.415875   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:00:03.425026   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:00:03.437823   28829 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:00:03.518418   28829 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:00:03.586389   28829 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:00:03.597266   28829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:00:03.669442   28829 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:00:03.679546   28829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:00:03.715983   28829 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:00:03.793958   28829 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:00:03.794135   28829 cli_runner.go:164] Run: docker exec -t embed-certs-20220601115855-16804 dig +short host.docker.internal
	I0601 12:00:03.928920   28829 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:00:03.929017   28829 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:00:03.933477   28829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
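The one-liner above keeps /etc/hosts idempotent: strip any stale host.minikube.internal entry, append the fresh mapping, and copy the result back into place. The same filtering expressed as a hypothetical Go helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" mapping, matching the grep -v / echo pipeline above.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n",
            "192.168.65.2", "host.minikube.internal"))
    }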
	I0601 12:00:03.943415   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:04.016419   28829 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:00:04.016501   28829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:00:04.048821   28829 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:00:04.048836   28829 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:00:04.048899   28829 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:00:04.079435   28829 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:00:04.079457   28829 cache_images.go:84] Images are preloaded, skipping loading
	I0601 12:00:04.079567   28829 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:00:04.154405   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:04.154416   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:04.154426   28829 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 12:00:04.154437   28829 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220601115855-16804 NodeName:embed-certs-20220601115855-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:00:04.154550   28829 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220601115855-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
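[editor's note] minikube renders the kubeadm config above in Go from the option struct logged at kubeadm.go:158. The fragment below is an illustrative text/template sketch of that render step, not minikube's actual template; the field names passed to Execute are assumptions for the example:

package main

import (
	"os"
	"text/template"
)

// A toy fragment of an InitConfiguration template; the real templates
// cover all of the sections shown in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "192.168.58.2",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/dockershim.sock",
		"NodeName":         "embed-certs-20220601115855-16804",
	})
}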
	
	I0601 12:00:04.154614   28829 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220601115855-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 12:00:04.154674   28829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:00:04.162496   28829 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:00:04.162605   28829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:00:04.169803   28829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0601 12:00:04.182475   28829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:00:04.196040   28829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0601 12:00:04.210349   28829 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:00:04.214249   28829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:00:04.224887   28829 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804 for IP: 192.168.58.2
	I0601 12:00:04.225006   28829 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:00:04.225070   28829 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:00:04.225156   28829 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/client.key
	I0601 12:00:04.225217   28829 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.key.cee25041
	I0601 12:00:04.225268   28829 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.key
	I0601 12:00:04.225483   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:00:04.225526   28829 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:00:04.225542   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:00:04.225573   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:00:04.225606   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:00:04.225635   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:00:04.225702   28829 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:00:04.226272   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:00:04.245065   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:00:04.264844   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:00:04.283813   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/embed-certs-20220601115855-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 12:00:04.302400   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:00:04.320094   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:00:04.337340   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:00:04.355164   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:00:04.372566   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:00:04.390758   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:00:04.407937   28829 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:00:04.425147   28829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:00:04.438402   28829 ssh_runner.go:195] Run: openssl version
	I0601 12:00:04.444064   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:00:04.452131   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.456181   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.456224   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:00:04.461511   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 12:00:04.468902   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:00:04.476746   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.480878   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.480926   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:00:04.486478   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:00:04.493830   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:00:04.501614   28829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.505599   28829 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.505640   28829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:00:04.511112   28829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
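[editor's note] The openssl x509 -hash / ln -fs pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0), the c_rehash-style layout OpenSSL scans when looking up trust anchors. A sketch of those two steps via os/exec, assuming the same cert paths as the log (the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert symlinks certPath into dir under <subject-hash>.0,
// which is how OpenSSL finds the CA during chain verification.
func linkCACert(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", dir, hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}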
	I0601 12:00:04.518272   28829 kubeadm.go:395] StartCluster: {Name:embed-certs-20220601115855-16804 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220601115855-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:00:04.518372   28829 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:00:04.546843   28829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:00:04.554437   28829 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:00:04.554453   28829 kubeadm.go:626] restartCluster start
	I0601 12:00:04.554494   28829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:00:04.561477   28829 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:04.561586   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:00:04.636533   28829 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220601115855-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:00:04.636800   28829 kubeconfig.go:127] "embed-certs-20220601115855-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:00:04.637127   28829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:00:04.638462   28829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:00:04.646150   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:04.646199   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:04.654404   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:04.877249   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:04.877380   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:04.888485   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.077954   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.078102   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.090777   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.277957   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.278185   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.288604   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.476559   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.476656   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.488394   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.677991   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.678216   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.689348   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:05.876473   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:05.876581   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:05.887319   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.078192   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.078404   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.088967   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.275971   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.276084   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.286277   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.476564   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.476653   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.487710   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.677961   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.678149   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.688765   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:06.878002   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:06.878195   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:06.888550   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.075946   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.076132   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.087777   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.276117   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.276185   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.284689   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.477252   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.477434   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.488159   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.677742   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.677844   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.688190   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.688199   28829 api_server.go:165] Checking apiserver status ...
	I0601 12:00:07.688241   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:00:07.696107   28829 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.696118   28829 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 12:00:07.696125   28829 kubeadm.go:1092] stopping kube-system containers ...
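[editor's note] Each "Checking apiserver status" entry above is one iteration of a roughly 200ms polling loop that runs pgrep over SSH and, after the window expires, concludes the apiserver is down and reconfigures. A stdlib-only sketch of that poll-until-deadline shape (the 3s window here is illustrative, not minikube's actual timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the kube-apiserver process
// shows up or the deadline passes, mirroring the loop in the log.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // process found
		}
		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence above
	}
	return "", fmt.Errorf("apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	fmt.Println(pid, err)
}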
	I0601 12:00:07.696181   28829 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:00:07.726742   28829 docker.go:442] Stopping containers: [54f727789abd 1a421477b475 d34c5263066b 4b5d8c649cd9 54ff8c39a3a3 d7c01b3e7bd3 aff02a265852 26c16b34697b 61e2850c4dc2 5c57a813ff5a f842c60a2bc5 e84f942430d3 8fa7e200ea41 d699653d0b64 0338f069b9af 8ea64f1a925b]
	I0601 12:00:07.726812   28829 ssh_runner.go:195] Run: docker stop 54f727789abd 1a421477b475 d34c5263066b 4b5d8c649cd9 54ff8c39a3a3 d7c01b3e7bd3 aff02a265852 26c16b34697b 61e2850c4dc2 5c57a813ff5a f842c60a2bc5 e84f942430d3 8fa7e200ea41 d699653d0b64 0338f069b9af 8ea64f1a925b
	I0601 12:00:07.758600   28829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:00:07.769183   28829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:00:07.777276   28829 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 18:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  1 18:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun  1 18:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 18:59 /etc/kubernetes/scheduler.conf
	
	I0601 12:00:07.777325   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 12:00:07.785105   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 12:00:07.792774   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 12:00:07.800094   28829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.800141   28829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:00:07.806961   28829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 12:00:07.814145   28829 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:00:07.814256   28829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 12:00:07.821393   28829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:00:07.829055   28829 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:00:07.829066   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:07.875534   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:08.943797   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068258535s)
	I0601 12:00:08.943827   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.070381   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.117719   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:09.164707   28829 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:00:09.164770   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:09.676929   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.174847   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.675184   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:00:10.725618   28829 api_server.go:71] duration metric: took 1.560936283s to wait for apiserver process to appear ...
	I0601 12:00:10.725639   28829 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:00:10.725650   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:13.229293   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0601 12:00:13.229314   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0601 12:00:13.731491   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:13.739444   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:00:13.739457   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:00:14.229657   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:14.235866   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:00:14.235887   28829 api_server.go:102] status: https://127.0.0.1:60746/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:00:14.729449   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:00:14.735550   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 200:
	ok
	I0601 12:00:14.742074   28829 api_server.go:140] control plane version: v1.23.6
	I0601 12:00:14.742087   28829 api_server.go:130] duration metric: took 4.016491291s to wait for apiserver health ...
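[editor's note] The healthz wait above tolerates a 403 (RBAC bootstrap roles not yet installed) and then 500s (rbac/bootstrap-roles and scheduling poststart hooks still failing) until the endpoint returns 200 "ok". A minimal sketch of that probe, skipping TLS verification the way a localhost bootstrap check can, with the URL and ~500ms interval taken from the log (the code itself is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert names the cluster endpoints, not 127.0.0.1,
		// so a local bootstrap probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://127.0.0.1:60746/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}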
	I0601 12:00:14.742094   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:00:14.742105   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:00:14.742117   28829 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:00:14.749795   28829 system_pods.go:59] 8 kube-system pods found
	I0601 12:00:14.749812   28829 system_pods.go:61] "coredns-64897985d-hxbhf" [b1b3b467-12fe-4681-9a86-2855ba1e087a] Running
	I0601 12:00:14.749819   28829 system_pods.go:61] "etcd-embed-certs-20220601115855-16804" [9bdd83e2-edc8-4fd6-913e-c978b2a390a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0601 12:00:14.749823   28829 system_pods.go:61] "kube-apiserver-embed-certs-20220601115855-16804" [f01aa1c0-7c66-485f-8ae9-ea81ec72d61f] Running
	I0601 12:00:14.749830   28829 system_pods.go:61] "kube-controller-manager-embed-certs-20220601115855-16804" [4b44afb1-a477-4b52-af8c-9fbf9947dcc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 12:00:14.749836   28829 system_pods.go:61] "kube-proxy-hhbwv" [19408c1b-0db7-4ce4-bda8-b9ef78054eb5] Running
	I0601 12:00:14.749840   28829 system_pods.go:61] "kube-scheduler-embed-certs-20220601115855-16804" [1e8cf785-92e1-4068-add7-d217ee3fd625] Running
	I0601 12:00:14.749845   28829 system_pods.go:61] "metrics-server-b955d9d8-cv5b4" [8e155e5b-8d5c-4898-a95f-4d24d1c85714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:00:14.749849   28829 system_pods.go:61] "storage-provisioner" [a3a21a47-4019-4f29-ac55-23ca85609de6] Running
	I0601 12:00:14.749853   28829 system_pods.go:74] duration metric: took 7.73298ms to wait for pod list to return data ...
	I0601 12:00:14.749859   28829 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:00:14.753342   28829 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:00:14.753360   28829 node_conditions.go:123] node cpu capacity is 6
	I0601 12:00:14.753372   28829 node_conditions.go:105] duration metric: took 3.509003ms to run NodePressure ...
	I0601 12:00:14.753387   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:00:14.902276   28829 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 12:00:14.908459   28829 kubeadm.go:777] kubelet initialised
	I0601 12:00:14.908471   28829 kubeadm.go:778] duration metric: took 6.181ms waiting for restarted kubelet to initialise ...
	I0601 12:00:14.908479   28829 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:00:14.914477   28829 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-hxbhf" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:14.919226   28829 pod_ready.go:92] pod "coredns-64897985d-hxbhf" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:14.919234   28829 pod_ready.go:81] duration metric: took 4.746053ms waiting for pod "coredns-64897985d-hxbhf" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:14.919239   28829 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:16.930345   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:18.930602   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:20.931370   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:23.429560   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:25.431054   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:27.431111   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:29.432632   28829 pod_ready.go:102] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:29.930254   28829 pod_ready.go:92] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.930266   28829 pod_ready.go:81] duration metric: took 15.011203247s waiting for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.930272   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.934493   28829 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.934501   28829 pod_ready.go:81] duration metric: took 4.223819ms waiting for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.934506   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.939831   28829 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.939839   28829 pod_ready.go:81] duration metric: took 5.322445ms waiting for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.939845   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hhbwv" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.944936   28829 pod_ready.go:92] pod "kube-proxy-hhbwv" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.944945   28829 pod_ready.go:81] duration metric: took 5.09599ms waiting for pod "kube-proxy-hhbwv" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.944951   28829 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.950311   28829 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:00:29.950320   28829 pod_ready.go:81] duration metric: took 5.363535ms waiting for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:29.950326   28829 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" ...
	I0601 12:00:32.337276   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:34.338997   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:36.838194   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:39.339010   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:41.837043   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:43.839697   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:46.337938   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:48.338698   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:50.837208   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:53.336924   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:55.337759   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:57.837371   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:00:59.838487   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:02.338943   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:04.839121   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:07.336527   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:09.835809   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:11.837079   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:13.838677   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:16.336928   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:18.837052   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:20.838148   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:23.335490   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:25.336728   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:27.839348   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:30.337601   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:32.838908   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:35.337845   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:37.836046   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:39.836118   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:41.836308   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:43.838508   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:46.338445   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:48.838271   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:50.838560   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:53.335328   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
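[editor's note] Each pod_ready line above is one pass of a loop that fetches the pod and inspects its PodReady condition; metrics-server never reaches Ready here because its registry is deliberately faked (CustomAddonRegistries:map[MetricsServer:fake.domain] in the StartCluster config). A client-go sketch of that readiness check (the kubeconfig path is a placeholder; error handling trimmed):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-b955d9d8-cv5b4", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2.5s between checks
	}
}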
	I0601 12:01:53.209412   28319 kubeadm.go:397] StartCluster complete in 7m58.682761983s
	I0601 12:01:53.209495   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0601 12:01:53.239013   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.239025   28319 logs.go:276] No container was found matching "kube-apiserver"
	I0601 12:01:53.239081   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0601 12:01:53.268562   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.268573   28319 logs.go:276] No container was found matching "etcd"
	I0601 12:01:53.268647   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0601 12:01:53.300274   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.300286   28319 logs.go:276] No container was found matching "coredns"
	I0601 12:01:53.300359   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0601 12:01:53.329677   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.329689   28319 logs.go:276] No container was found matching "kube-scheduler"
	I0601 12:01:53.329746   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0601 12:01:53.361469   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.361481   28319 logs.go:276] No container was found matching "kube-proxy"
	I0601 12:01:53.361536   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0601 12:01:53.391374   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.391386   28319 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0601 12:01:53.391442   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0601 12:01:53.419646   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.419659   28319 logs.go:276] No container was found matching "storage-provisioner"
	I0601 12:01:53.419718   28319 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0601 12:01:53.450297   28319 logs.go:274] 0 containers: []
	W0601 12:01:53.450310   28319 logs.go:276] No container was found matching "kube-controller-manager"
	I0601 12:01:53.450317   28319 logs.go:123] Gathering logs for kubelet ...
	I0601 12:01:53.450324   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0601 12:01:53.493726   28319 logs.go:123] Gathering logs for dmesg ...
	I0601 12:01:53.493744   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0601 12:01:53.506201   28319 logs.go:123] Gathering logs for describe nodes ...
	I0601 12:01:53.506214   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0601 12:01:53.559752   28319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0601 12:01:53.559763   28319 logs.go:123] Gathering logs for Docker ...
	I0601 12:01:53.559771   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0601 12:01:53.572451   28319 logs.go:123] Gathering logs for container status ...
	I0601 12:01:53.572466   28319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0601 12:01:55.624682   28319 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052227376s)
	W0601 12:01:55.624796   28319 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	(the kubelet-check pair above was emitted five times as kubeadm retried the health check)
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in Docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
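	
	For reference, the diagnostics kubeadm suggests above can be run against the minikube node from the host. This is a sketch only, not part of the log; <profile> stands in for the profile used in this run:
	
	  minikube ssh -p <profile> "sudo systemctl status kubelet"
	  minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 50"
	  minikube ssh -p <profile> "docker ps -a | grep kube | grep -v pause"
	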
	W0601 12:01:55.624810   28319 out.go:239] * 
	W0601 12:01:55.624940   28319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output shown above.
	
	W0601 12:01:55.624954   28319 out.go:239] * 
	W0601 12:01:55.625525   28319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 12:01:55.688737   28319 out.go:177] 
	W0601 12:01:55.731070   28319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output shown above.
	
	W0601 12:01:55.731219   28319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0601 12:01:55.731329   28319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0601 12:01:55.794921   28319 out.go:177] 
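	
	The suggestion above corresponds to retrying the start with the kubelet cgroup driver pinned to systemd; a sketch, with <profile> as a placeholder for the failing profile:
	
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	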
	I0601 12:01:55.836076   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:01:58.336447   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:00.838455   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:03.335336   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:05.336325   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:07.838513   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:10.337754   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:12.838489   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:15.335382   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:17.837535   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:20.334412   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:22.334794   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:24.836980   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:26.837851   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:29.334703   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:31.836821   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:34.335821   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:36.355932   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:38.836152   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:41.338268   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:43.834144   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:45.838347   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:48.334485   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:50.335178   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:52.336039   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:54.835277   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:56.845623   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:02:59.335323   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:01.335991   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:03.835867   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:05.836222   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:08.336341   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:10.337348   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:12.837078   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:14.837161   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:17.337300   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:19.833964   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:21.834609   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:23.837358   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:26.335932   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:28.833759   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:30.836473   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:32.836486   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:35.337161   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:37.834111   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:39.834932   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:41.835885   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:44.334515   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:46.334562   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:48.836033   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:51.333781   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:53.336702   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:55.833470   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:57.836511   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:03:59.837021   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:02.335801   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:04.833600   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:06.837271   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:09.333347   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:11.334780   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:13.336669   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:15.834388   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:17.836752   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:20.336587   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:22.833053   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:24.835021   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:27.334160   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:29.834266   28829 pod_ready.go:102] pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace has status "Ready":"False"
	I0601 12:04:30.328306   28829 pod_ready.go:81] duration metric: took 4m0.380859693s waiting for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" ...
	E0601 12:04:30.328371   28829 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-cv5b4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 12:04:30.328384   28829 pod_ready.go:38] duration metric: took 4m15.422969154s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:04:30.328408   28829 kubeadm.go:630] restartCluster took 4m25.777145349s
	W0601 12:04:30.328486   28829 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 12:04:30.328501   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 12:05:08.804815   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.476762521s)
	I0601 12:05:08.804876   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:05:08.815268   28829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:05:08.823153   28829 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 12:05:08.823230   28829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:05:08.830907   28829 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 12:05:08.830934   28829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 12:05:09.330682   28829 out.go:204]   - Generating certificates and keys ...
	I0601 12:05:09.942397   28829 out.go:204]   - Booting up control plane ...
	I0601 12:05:16.496487   28829 out.go:204]   - Configuring RBAC rules ...
	I0601 12:05:16.872838   28829 cni.go:95] Creating CNI manager for ""
	I0601 12:05:16.872853   28829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:05:16.872875   28829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:05:16.872963   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=embed-certs-20220601115855-16804 minikube.k8s.io/updated_at=2022_06_01T12_05_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:16.872968   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:16.892760   28829 ops.go:34] apiserver oom_adj: -16
	I0601 12:05:17.095618   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:17.711843   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:18.212167   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:18.711996   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:19.211974   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:19.711974   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:20.211968   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:20.711877   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:21.211868   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:21.711939   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:22.211919   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:22.711862   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:23.211803   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:23.711916   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:24.211809   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:24.711811   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:25.211781   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:25.711871   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:26.211933   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:26.711893   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:27.211878   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:27.711875   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:28.211818   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:28.711786   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:29.211741   28829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:05:29.268731   28829 kubeadm.go:1045] duration metric: took 12.395967112s to wait for elevateKubeSystemPrivileges.
	I0601 12:05:29.268748   28829 kubeadm.go:397] StartCluster complete in 5m24.754388961s
	I0601 12:05:29.268775   28829 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:05:29.268868   28829 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:05:29.269672   28829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:05:29.783919   28829 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220601115855-16804" rescaled to 1
	I0601 12:05:29.783953   28829 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:05:29.783975   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:05:29.783981   28829 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:05:29.804692   28829 addons.go:65] Setting dashboard=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.804694   28829 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.804695   28829 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.784099   28829 config.go:178] Loaded profile config "embed-certs-20220601115855-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:05:29.804706   28829 addons.go:153] Setting addon dashboard=true in "embed-certs-20220601115855-16804"
	I0601 12:05:29.804706   28829 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220601115855-16804"
	I0601 12:05:29.804711   28829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220601115855-16804"
	I0601 12:05:29.804715   28829 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220601115855-16804"
	W0601 12:05:29.804714   28829 addons.go:165] addon dashboard should already be in state true
	W0601 12:05:29.804724   28829 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:05:29.804723   28829 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220601115855-16804"
	W0601 12:05:29.804734   28829 addons.go:165] addon metrics-server should already be in state true
	I0601 12:05:29.804610   28829 out.go:177] * Verifying Kubernetes components...
	I0601 12:05:29.804760   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:29.804762   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:29.804763   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:29.862725   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:05:29.805011   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.805135   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.865559   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.866658   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:29.881842   28829 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 12:05:29.903600   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:29.996535   28829 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220601115855-16804"
	I0601 12:05:30.052547   28829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:05:30.032695   28829 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0601 12:05:30.052583   28829 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:05:30.073765   28829 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:05:30.111524   28829 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:05:30.111603   28829 host.go:66] Checking if "embed-certs-20220601115855-16804" exists ...
	I0601 12:05:30.128372   28829 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220601115855-16804" to be "Ready" ...
	I0601 12:05:30.148423   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:05:30.148483   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:05:30.185592   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:05:30.149018   28829 cli_runner.go:164] Run: docker container inspect embed-certs-20220601115855-16804 --format={{.State.Status}}
	I0601 12:05:30.185649   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.185663   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.189784   28829 node_ready.go:49] node "embed-certs-20220601115855-16804" has status "Ready":"True"
	I0601 12:05:30.206670   28829 node_ready.go:38] duration metric: took 58.256695ms waiting for node "embed-certs-20220601115855-16804" to be "Ready" ...
	I0601 12:05:30.206553   28829 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:05:30.206695   28829 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:05:30.227763   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:05:30.227781   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:05:30.227875   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.242759   28829 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-d4qsr" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:30.287225   28829 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:05:30.287239   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:05:30.287310   28829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220601115855-16804
	I0601 12:05:30.323283   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.327102   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.342566   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.383248   28829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60747 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/embed-certs-20220601115855-16804/id_rsa Username:docker}
	I0601 12:05:30.503711   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:05:30.503751   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:05:30.516741   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:05:30.600152   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:05:30.600169   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:05:30.602079   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:05:30.602090   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:05:30.617625   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:05:30.692085   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:05:30.692104   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:05:30.706290   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:05:30.706317   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:05:30.798560   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:05:30.798574   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:05:30.809813   28829 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:05:30.809843   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:05:30.887234   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:05:30.887249   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:05:30.891885   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:05:30.910055   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:05:30.910073   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:05:30.999281   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:05:30.999339   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:05:31.087293   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:05:31.087310   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:05:31.115353   28829 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:05:31.115368   28829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:05:31.198051   28829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:05:31.605430   28829 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.723571406s)
	I0601 12:05:31.605450   28829 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 12:05:31.988501   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.471752451s)
	I0601 12:05:31.988561   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.370919355s)
	I0601 12:05:32.095906   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204010693s)
	I0601 12:05:32.095940   28829 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220601115855-16804"
	I0601 12:05:32.307621   28829 pod_ready.go:102] pod "coredns-64897985d-d4qsr" in "kube-system" namespace has status "Ready":"False"
	I0601 12:05:32.412245   28829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.214183058s)
	I0601 12:05:32.489169   28829 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 12:05:32.510111   28829 addons.go:417] enableAddons completed in 2.726142639s
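	
	Outside a test harness, the same addon set can be enabled through the minikube CLI rather than by applying the manifests directly; a sketch, with <profile> as a placeholder:
	
	  minikube addons enable storage-provisioner -p <profile>
	  minikube addons enable metrics-server -p <profile>
	  minikube addons enable dashboard -p <profile>
	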
	I0601 12:05:34.791205   28829 pod_ready.go:102] pod "coredns-64897985d-d4qsr" in "kube-system" namespace has status "Ready":"False"
	I0601 12:05:35.790741   28829 pod_ready.go:92] pod "coredns-64897985d-d4qsr" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.790756   28829 pod_ready.go:81] duration metric: took 5.548037364s waiting for pod "coredns-64897985d-d4qsr" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.790764   28829 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-lflsj" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.796990   28829 pod_ready.go:92] pod "coredns-64897985d-lflsj" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.797000   28829 pod_ready.go:81] duration metric: took 6.223912ms waiting for pod "coredns-64897985d-lflsj" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.797007   28829 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.801815   28829 pod_ready.go:92] pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.801825   28829 pod_ready.go:81] duration metric: took 4.81318ms waiting for pod "etcd-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.801839   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.806567   28829 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.806577   28829 pod_ready.go:81] duration metric: took 4.727671ms waiting for pod "kube-apiserver-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.806584   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.812087   28829 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:35.812104   28829 pod_ready.go:81] duration metric: took 5.511915ms waiting for pod "kube-controller-manager-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:35.812121   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8pt2q" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.188181   28829 pod_ready.go:92] pod "kube-proxy-8pt2q" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:36.188191   28829 pod_ready.go:81] duration metric: took 376.062763ms waiting for pod "kube-proxy-8pt2q" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.188198   28829 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.588612   28829 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:05:36.588642   28829 pod_ready.go:81] duration metric: took 400.444499ms waiting for pod "kube-scheduler-embed-certs-20220601115855-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:05:36.588648   28829 pod_ready.go:38] duration metric: took 6.36129347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:05:36.588666   28829 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:05:36.588714   28829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:05:36.600807   28829 api_server.go:71] duration metric: took 6.816911881s to wait for apiserver process to appear ...
	I0601 12:05:36.600821   28829 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:05:36.600835   28829 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60746/healthz ...
	I0601 12:05:36.607455   28829 api_server.go:266] https://127.0.0.1:60746/healthz returned 200:
	ok
	I0601 12:05:36.608556   28829 api_server.go:140] control plane version: v1.23.6
	I0601 12:05:36.608565   28829 api_server.go:130] duration metric: took 7.7397ms to wait for apiserver health ...
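	
	The healthz probe logged above amounts to an HTTPS GET against the forwarded apiserver port; roughly the following, where the port is specific to this run and -k skips verification of the cluster's self-signed certificate:
	
	  curl -sk https://127.0.0.1:60746/healthz
	  # expected body on success: ok
	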
	I0601 12:05:36.608570   28829 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:05:36.792965   28829 system_pods.go:59] 9 kube-system pods found
	I0601 12:05:36.792980   28829 system_pods.go:61] "coredns-64897985d-d4qsr" [85fd00ad-b978-455f-b46f-3abba8272140] Running
	I0601 12:05:36.792983   28829 system_pods.go:61] "coredns-64897985d-lflsj" [3cc3702d-8d03-4400-951b-ad67ad94d3dc] Running
	I0601 12:05:36.792988   28829 system_pods.go:61] "etcd-embed-certs-20220601115855-16804" [28835ec6-e6b7-44fc-b6d8-0c9828e4bbc5] Running
	I0601 12:05:36.792999   28829 system_pods.go:61] "kube-apiserver-embed-certs-20220601115855-16804" [0a9f4bf1-fba4-403b-a490-d07f9eb64a93] Running
	I0601 12:05:36.793003   28829 system_pods.go:61] "kube-controller-manager-embed-certs-20220601115855-16804" [0b5a87bc-f75b-4497-9c7c-74317a55b16e] Running
	I0601 12:05:36.793008   28829 system_pods.go:61] "kube-proxy-8pt2q" [fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb] Running
	I0601 12:05:36.793013   28829 system_pods.go:61] "kube-scheduler-embed-certs-20220601115855-16804" [b3e9d427-fac9-4a1f-b158-d7b04cd8f4e4] Running
	I0601 12:05:36.793019   28829 system_pods.go:61] "metrics-server-b955d9d8-fnr2z" [aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:05:36.793024   28829 system_pods.go:61] "storage-provisioner" [f9e71c1d-677f-4843-9310-a42068423370] Running
	I0601 12:05:36.793028   28829 system_pods.go:74] duration metric: took 184.456154ms to wait for pod list to return data ...
	I0601 12:05:36.793034   28829 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:05:36.988778   28829 default_sa.go:45] found service account: "default"
	I0601 12:05:36.988790   28829 default_sa.go:55] duration metric: took 195.754132ms for default service account to be created ...
	I0601 12:05:36.988796   28829 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 12:05:37.192496   28829 system_pods.go:86] 9 kube-system pods found
	I0601 12:05:37.192511   28829 system_pods.go:89] "coredns-64897985d-d4qsr" [85fd00ad-b978-455f-b46f-3abba8272140] Running
	I0601 12:05:37.192515   28829 system_pods.go:89] "coredns-64897985d-lflsj" [3cc3702d-8d03-4400-951b-ad67ad94d3dc] Running
	I0601 12:05:37.192519   28829 system_pods.go:89] "etcd-embed-certs-20220601115855-16804" [28835ec6-e6b7-44fc-b6d8-0c9828e4bbc5] Running
	I0601 12:05:37.192530   28829 system_pods.go:89] "kube-apiserver-embed-certs-20220601115855-16804" [0a9f4bf1-fba4-403b-a490-d07f9eb64a93] Running
	I0601 12:05:37.192535   28829 system_pods.go:89] "kube-controller-manager-embed-certs-20220601115855-16804" [0b5a87bc-f75b-4497-9c7c-74317a55b16e] Running
	I0601 12:05:37.192538   28829 system_pods.go:89] "kube-proxy-8pt2q" [fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb] Running
	I0601 12:05:37.192543   28829 system_pods.go:89] "kube-scheduler-embed-certs-20220601115855-16804" [b3e9d427-fac9-4a1f-b158-d7b04cd8f4e4] Running
	I0601 12:05:37.192550   28829 system_pods.go:89] "metrics-server-b955d9d8-fnr2z" [aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:05:37.192554   28829 system_pods.go:89] "storage-provisioner" [f9e71c1d-677f-4843-9310-a42068423370] Running
	I0601 12:05:37.192559   28829 system_pods.go:126] duration metric: took 203.762371ms to wait for k8s-apps to be running ...
	I0601 12:05:37.192567   28829 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 12:05:37.192623   28829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:05:37.208966   28829 system_svc.go:56] duration metric: took 16.397228ms WaitForService to wait for kubelet.
	I0601 12:05:37.208983   28829 kubeadm.go:572] duration metric: took 7.425101117s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 12:05:37.209003   28829 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:05:37.397658   28829 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:05:37.397674   28829 node_conditions.go:123] node cpu capacity is 6
	I0601 12:05:37.397686   28829 node_conditions.go:105] duration metric: took 188.678974ms to run NodePressure ...
	I0601 12:05:37.397695   28829 start.go:213] waiting for startup goroutines ...
	I0601 12:05:37.436249   28829 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:05:37.458438   28829 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220601115855-16804" cluster and "default" namespace by default
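
A note on the version line above: kubectl is supported within one minor version of the cluster's apiserver, so the "minor skew: 1" between kubectl 1.24.0 and cluster 1.23.6 is informational, not a failure. A minimal Go sketch of that comparison (the helper name is illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor version
	// components of two "major.minor.patch" version strings.
	func minorSkew(client, cluster string) int {
		minor := func(v string) int {
			m, _ := strconv.Atoi(strings.Split(strings.TrimPrefix(v, "v"), ".")[1])
			return m
		}
		d := minor(client) - minor(cluster)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println(minorSkew("1.24.0", "1.23.6")) // 1 — within kubectl's supported +/-1 skew
	}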
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:00:01 UTC, end at Wed 2022-06-01 19:06:33 UTC. --
	Jun 01 19:04:57 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:04:57.174821542Z" level=info msg="ignoring event" container=0aad5ca0a394cf6461c23f14fe591e66b89a7e79a4b465ab0eeb1e5f3efa0898 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.243462171Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c63a9420de02a1022c759b4caf98b142bbfb581f986dde7e4ac807c9aaaa4403
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.298602462Z" level=info msg="ignoring event" container=c63a9420de02a1022c759b4caf98b142bbfb581f986dde7e4ac807c9aaaa4403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.424678894Z" level=info msg="ignoring event" container=1b220fad43e1f5d85ff364980ed68edc766d714aeebc4982e79078ca74493ee8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.527325630Z" level=info msg="ignoring event" container=e3f4c96e3f9800c226e9f9aa6c062d24993f65c6c3f4272d8aa2276514c463fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.628775935Z" level=info msg="ignoring event" container=b739864d288ecbde8b0b1e93246a6d20856b0ac9233472ec6bf4de3ef3b43e33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.734389527Z" level=info msg="ignoring event" container=0110a1f81dcf8a5b18e4e20a574cc456a211c4ea74695c5236b0b3d3b4e3913a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:07 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:07.846017552Z" level=info msg="ignoring event" container=4af3a998b43b14c55b04591304de664d354309a29431e35de379467fd5eab9b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:32 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:32.692857665Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:32 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:32.692938410Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:32 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:32.695357867Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:33 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:33.327247619Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 19:05:38 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:38.088611457Z" level=info msg="ignoring event" container=56c8192225b0e9ced84e900e9c502343adf80b27c63d949c62dc5a73f1abf747 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:38 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:38.343116149Z" level=info msg="ignoring event" container=5932bcb6e9cf366b8149d7374f2be62bfcd0c9eb75134b18788050006d423ecf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:38 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:38.866356767Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:05:39 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:39.092699177Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:05:42 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:42.443264471Z" level=info msg="ignoring event" container=e850ccaa1099d442cbfe06579668f7fc245f3a13b858c88427d8a04116d6a64e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:43 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:43.262087068Z" level=info msg="ignoring event" container=597493054b80ac3015a530b4cdc9e41f26c7d37ad16eec9806b85bd595acf49d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:05:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:46.194170582Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:46.194210761Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:05:46 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:05:46.195628514Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:06:30.543462213Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:06:30.543493558Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:06:30.545515073Z" level=info msg="ignoring event" container=38eed61c254eed2d25d1ff914a9f96fa7ee7046ee5a42cb1fb735b2c6ca10832 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 dockerd[130]: time="2022-06-01T19:06:30.545564820Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
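
The repeated "no such host" pull errors above all trace to fake.domain, an unresolvable registry host; the StartStop tests appear to point the metrics-server image there on purpose so the pod stays in ErrImagePull (see the kubelet log below). The failure occurs at DNS resolution, before any registry traffic, as a minimal Go sketch shows (results depend on your resolver; wildcard DNS setups may differ):

	package main

	import (
		"errors"
		"fmt"
		"net"
	)

	func main() {
		// "fake.domain" is the unresolvable registry host from the log above.
		_, err := net.LookupHost("fake.domain")
		var dnsErr *net.DNSError
		if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
			fmt.Println("no such host:", dnsErr.Name) // matches the daemon's error text
		}
	}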
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	38eed61c254ee       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   660c0600baea9
	40a3e61ea1d7c       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   55 seconds ago       Running             kubernetes-dashboard        0                   850a668ad00aa
	415c0adaff0b7       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   b7310893337b6
	5c7406ea20fc3       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   f9cae47eb2f6d
	218d974cd4bd7       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   8a094407f9052
	8eccce42310b9       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   37c3ce776f22e
	001882f735bb7       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   9ff8f749dc98c
	6d8d06f0c3f76       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   f34e81bac3033
	0c9e2554bdfe8       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   73b9ea2bded5e
	
	* 
	* ==> coredns [5c7406ea20fc] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220601115855-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220601115855-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=embed-certs-20220601115855-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T12_05_16_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 19:05:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220601115855-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 19:06:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 19:06:26 +0000   Wed, 01 Jun 2022 19:06:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220601115855-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                9dffb622-e66d-49af-bc81-c172407d2bbc
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-lflsj                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     65s
	  kube-system                 etcd-embed-certs-20220601115855-16804                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kube-apiserver-embed-certs-20220601115855-16804             250m (4%)     0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-embed-certs-20220601115855-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-8pt2q                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-embed-certs-20220601115855-16804             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 metrics-server-b955d9d8-fnr2z                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         63s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-8d2ch                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-n4ksx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 63s   kube-proxy  
	  Normal  Starting                 78s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                67s   kubelet     Node embed-certs-20220601115855-16804 status is now: NodeReady
	  Normal  Starting                 8s    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8s    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8s    kubelet     Node embed-certs-20220601115855-16804 status is now: NodeReady
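
A note on the resource columns above: kubectl's describe output contains literal '%' characters, and when such captured output is re-logged through a printf-style call without a "%s" verb, Go renders each stray '%' as a "%!...(MISSING)" artifact, which is why percentages in these reports sometimes appear mangled. A minimal Go reproduction (the row contents are illustrative):

	package main

	import "fmt"

	func main() {
		row := "cpu 850m (14%)  0 (0%)" // describe output with literal '%'
		fmt.Println(fmt.Sprintf(row))       // cpu 850m (14%!)(MISSING)  0 (0%!)(MISSING)
		fmt.Println(fmt.Sprintf("%s", row)) // cpu 850m (14%)  0 (0%) — pass output as an operand
	}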
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [0c9e2554bdfe] <==
	* {"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:05:12.041Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220601115855-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:05:12.041Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:05:12.041Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T19:05:12.042Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:05:12.043Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-01T19:05:12.046Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.046Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.046Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:05:12.054Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:05:12.054Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:05:30.133Z","caller":"traceutil/trace.go:171","msg":"trace[362881679] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"105.350579ms","start":"2022-06-01T19:05:30.028Z","end":"2022-06-01T19:05:30.133Z","steps":["trace[362881679] 'read index received'  (duration: 105.330174ms)","trace[362881679] 'applied index is now lower than readState.Index'  (duration: 19.651µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T19:05:30.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"179.66537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/kube-system/kube-dns-gdqt6\" ","response":"range_response_count:1 size:912"}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[2006057421] range","detail":"{range_begin:/registry/endpointslices/kube-system/kube-dns-gdqt6; range_end:; response_count:1; response_revision:413; }","duration":"179.722305ms","start":"2022-06-01T19:05:30.028Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[2006057421] 'agreement among raft nodes before linearized reading'  (duration: 105.485103ms)","trace[2006057421] 'range keys from in-memory index tree'  (duration: 74.1599ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[487800730] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"179.715675ms","start":"2022-06-01T19:05:30.028Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[487800730] 'process raft request'  (duration: 105.305107ms)","trace[487800730] 'compare'  (duration: 73.978449ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[1425637086] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"178.192897ms","start":"2022-06-01T19:05:30.030Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[1425637086] 'process raft request'  (duration: 177.993948ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:05:30.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.600148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-d4qsr\" ","response":"range_response_count:1 size:3462"}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[834637519] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-d4qsr; range_end:; response_count:1; response_revision:415; }","duration":"102.617859ms","start":"2022-06-01T19:05:30.105Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[834637519] 'agreement among raft nodes before linearized reading'  (duration: 102.546921ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:05:30.208Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"177.630926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-64897985d\" ","response":"range_response_count:1 size:3511"}
	{"level":"info","ts":"2022-06-01T19:05:30.208Z","caller":"traceutil/trace.go:171","msg":"trace[2014257552] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-64897985d; range_end:; response_count:1; response_revision:415; }","duration":"177.657381ms","start":"2022-06-01T19:05:30.030Z","end":"2022-06-01T19:05:30.208Z","steps":["trace[2014257552] 'agreement among raft nodes before linearized reading'  (duration: 177.63083ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:06:34 up  1:09,  0 users,  load average: 0.30, 0.40, 0.73
	Linux embed-certs-20220601115855-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [001882f735bb] <==
	* I0601 19:05:15.256114       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 19:05:15.327099       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 19:05:15.331083       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 19:05:15.331997       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 19:05:15.334771       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 19:05:16.085091       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:05:16.718070       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:05:16.724833       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 19:05:16.737965       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:05:16.923379       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:05:29.193087       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 19:05:29.842617       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:05:30.893131       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:05:32.085311       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.54.92]
	E0601 19:05:32.118001       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0601 19:05:32.386482       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.95.217]
	I0601 19:05:32.399885       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.88.214]
	W0601 19:05:32.716937       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:05:32.717080       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:05:32.717104       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0601 19:06:32.673417       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:06:32.673652       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:06:32.673702       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
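
The recurring 503 for v1beta1.metrics.k8s.io is consistent with the metrics-server pod never becoming Ready (its image pull fails, per the Docker and kubelet logs): the APIService is registered but has no healthy backend, so the OpenAPI aggregator keeps requeueing it. A hedged client-go sketch for probing the aggregated group from outside the cluster (kubeconfig resolution is illustrative):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// While metrics-server is down this returns an error carrying the same
		// 503 the apiserver logs for v1beta1.metrics.k8s.io.
		_, err = cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		fmt.Println("metrics API reachable:", err == nil, "error:", err)
	}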
	
	* 
	* ==> kube-controller-manager [6d8d06f0c3f7] <==
	* I0601 19:05:30.304238       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-d4qsr"
	I0601 19:05:31.709586       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 19:05:31.721993       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 19:05:31.794640       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 19:05:31.802307       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-fnr2z"
	I0601 19:05:32.013092       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 19:05:32.022400       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.034590       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.035398       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 19:05:32.089045       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:05:32.089094       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.095238       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:05:32.099710       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.103695       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.103866       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:05:32.117161       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:05:32.117274       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.117298       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:05:32.117309       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:05:32.124064       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:05:32.124368       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:05:32.189822       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-n4ksx"
	I0601 19:05:32.194432       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-8d2ch"
	E0601 19:06:26.172673       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 19:06:26.179913       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [218d974cd4bd] <==
	* I0601 19:05:30.705550       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:05:30.705611       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:05:30.705631       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:05:30.889606       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:05:30.889654       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:05:30.889662       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:05:30.889694       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:05:30.890039       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:05:30.890692       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:05:30.890709       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:05:30.890747       1 config.go:317] "Starting service config controller"
	I0601 19:05:30.890750       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:05:30.990890       1 shared_informer.go:247] Caches are synced for service config 
	I0601 19:05:30.990922       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [8eccce42310b] <==
	* W0601 19:05:14.026546       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 19:05:14.026680       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 19:05:14.027375       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 19:05:14.027389       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 19:05:14.027458       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0601 19:05:14.027467       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:05:14.027953       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 19:05:14.028008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 19:05:14.028442       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:05:14.028592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:05:14.028620       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:05:14.028633       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:05:14.028810       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 19:05:14.028904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 19:05:14.028755       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0601 19:05:14.029083       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0601 19:05:14.861883       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 19:05:14.861937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 19:05:14.913852       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:05:14.913920       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:05:14.992863       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:05:14.992899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:05:15.092244       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:05:15.092287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0601 19:05:17.117866       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:00:01 UTC, end at Wed 2022-06-01 19:06:35 UTC. --
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771539    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb-kube-proxy\") pod \"kube-proxy-8pt2q\" (UID: \"fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb\") " pod="kube-system/kube-proxy-8pt2q"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771556    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb-xtables-lock\") pod \"kube-proxy-8pt2q\" (UID: \"fc613f9b-8ed7-4c30-8a2e-aef8e9c601cb\") " pod="kube-system/kube-proxy-8pt2q"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771595    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cc3702d-8d03-4400-951b-ad67ad94d3dc-config-volume\") pod \"coredns-64897985d-lflsj\" (UID: \"3cc3702d-8d03-4400-951b-ad67ad94d3dc\") " pod="kube-system/coredns-64897985d-lflsj"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771642    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k844p\" (UniqueName: \"kubernetes.io/projected/3cc3702d-8d03-4400-951b-ad67ad94d3dc-kube-api-access-k844p\") pod \"coredns-64897985d-lflsj\" (UID: \"3cc3702d-8d03-4400-951b-ad67ad94d3dc\") " pod="kube-system/coredns-64897985d-lflsj"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771703    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6b9cf5f5-152e-49cb-9646-876836323cd4-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-n4ksx\" (UID: \"6b9cf5f5-152e-49cb-9646-876836323cd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-n4ksx"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771839    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstfn\" (UniqueName: \"kubernetes.io/projected/6b9cf5f5-152e-49cb-9646-876836323cd4-kube-api-access-wstfn\") pod \"kubernetes-dashboard-8469778f77-n4ksx\" (UID: \"6b9cf5f5-152e-49cb-9646-876836323cd4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-n4ksx"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771897    7144 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f-tmp-dir\") pod \"metrics-server-b955d9d8-fnr2z\" (UID: \"aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f\") " pod="kube-system/metrics-server-b955d9d8-fnr2z"
	Jun 01 19:06:27 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:27.771923    7144 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 19:06:28 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:28.946559    7144 request.go:665] Waited for 1.138589098s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 01 19:06:28 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:28.952182    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220601115855-16804\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220601115855-16804"
	Jun 01 19:06:29 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:29.179916    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220601115855-16804\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220601115855-16804"
	Jun 01 19:06:29 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:29.351888    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220601115855-16804\" already exists" pod="kube-system/etcd-embed-certs-20220601115855-16804"
	Jun 01 19:06:29 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:29.619302    7144 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220601115855-16804\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220601115855-16804"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:30.151536    7144 scope.go:110] "RemoveContainer" containerID="597493054b80ac3015a530b4cdc9e41f26c7d37ad16eec9806b85bd595acf49d"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546224    7144 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546289    7144 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546411    7144 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8nvr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-fnr2z_kube-system(aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.546544    7144 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-fnr2z" podUID=aaaea80d-aee2-4f43-8ffe-e70aa5fe0b2f
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:30.802501    7144 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-8d2ch through plugin: invalid network status for"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:30.807247    7144 scope.go:110] "RemoveContainer" containerID="597493054b80ac3015a530b4cdc9e41f26c7d37ad16eec9806b85bd595acf49d"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:30.807474    7144 scope.go:110] "RemoveContainer" containerID="38eed61c254eed2d25d1ff914a9f96fa7ee7046ee5a42cb1fb735b2c6ca10832"
	Jun 01 19:06:30 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:30.807938    7144 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-8d2ch_kubernetes-dashboard(26ee83bc-3db8-4f98-8930-1dbcf4691729)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-8d2ch" podUID=26ee83bc-3db8-4f98-8930-1dbcf4691729
	Jun 01 19:06:31 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:31.814573    7144 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-8d2ch through plugin: invalid network status for"
	Jun 01 19:06:31 embed-certs-20220601115855-16804 kubelet[7144]: I0601 19:06:31.817486    7144 scope.go:110] "RemoveContainer" containerID="38eed61c254eed2d25d1ff914a9f96fa7ee7046ee5a42cb1fb735b2c6ca10832"
	Jun 01 19:06:31 embed-certs-20220601115855-16804 kubelet[7144]: E0601 19:06:31.817724    7144 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-8d2ch_kubernetes-dashboard(26ee83bc-3db8-4f98-8930-1dbcf4691729)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-8d2ch" podUID=26ee83bc-3db8-4f98-8930-1dbcf4691729
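
The "back-off 10s restarting failed container" entries above are kubelet's crash-loop backoff: restart attempts are delayed starting at 10s, doubling on each failure, and capped at 5 minutes (upstream kubelet defaults). A small Go sketch of that schedule:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CrashLoopBackOff schedule: base 10s, factor 2, cap 5m.
		d := 10 * time.Second
		for i := 0; i < 6; i++ {
			fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s
			d *= 2
			if d > 5*time.Minute {
				d = 5 * time.Minute
			}
		}
	}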
	
	* 
	* ==> kubernetes-dashboard [40a3e61ea1d7] <==
	* 2022/06/01 19:05:38 Starting overwatch
	2022/06/01 19:05:38 Using namespace: kubernetes-dashboard
	2022/06/01 19:05:38 Using in-cluster config to connect to apiserver
	2022/06/01 19:05:38 Using secret token for csrf signing
	2022/06/01 19:05:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 19:05:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 19:05:38 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 19:05:38 Generating JWE encryption key
	2022/06/01 19:05:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 19:05:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 19:05:38 Initializing JWE encryption key from synchronized object
	2022/06/01 19:05:38 Creating in-cluster Sidecar client
	2022/06/01 19:05:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:05:38 Serving insecurely on HTTP port: 9090
	2022/06/01 19:06:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [415c0adaff0b] <==
	* I0601 19:05:32.666541       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:05:32.676286       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:05:32.676372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 19:05:32.686506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 19:05:32.686726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601115855-16804_6ba63202-3d2b-48a2-ab36-0a30f813ae17!
	I0601 19:05:32.686928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53ecb739-2455-4707-9163-e52e96cbeefc", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220601115855-16804_6ba63202-3d2b-48a2-ab36-0a30f813ae17 became leader
	I0601 19:05:32.787748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220601115855-16804_6ba63202-3d2b-48a2-ab36-0a30f813ae17!
	

                                                
                                                
-- /stdout --
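Note on the storage-provisioner lines in the dump above: before starting its controller, the provisioner takes the kube-system/k8s.io-minikube-hostpath lock through client-go leader election (the log shows an Endpoints-based lock and the "became leader" event). A minimal sketch of the same pattern, using the current Lease-based lock rather than the Endpoints lock seen in the log; the identity string and kubeconfig handling are illustrative, not the provisioner's actual code:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Build a clientset from the local kubeconfig (in-cluster config works too).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; the name mirrors the one in the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; provisioner controller would start here")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}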
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-fnr2z
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 describe pod metrics-server-b955d9d8-fnr2z
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220601115855-16804 describe pod metrics-server-b955d9d8-fnr2z: exit status 1 (298.962386ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-fnr2z" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220601115855-16804 describe pod metrics-server-b955d9d8-fnr2z: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (44.40s)
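The post-mortem above follows a fixed pattern: list every pod whose status.phase is not Running, then kubectl describe each one. The kubelet log shows metrics-server failing to pull from the unreachable fake.domain registry, which is why it is the one non-running pod; it had already been deleted by the time describe ran, hence the NotFound and exit status 1. A rough client-go equivalent of the listing step (kubeconfig handling is illustrative; the test helpers shell out to kubectl instead):

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the same way kubectl --context would.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Mirror `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}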

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
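Each WARNING below is one iteration of the helper's poll loop re-listing dashboard pods by label selector. The EOF errors indicate the connection to the apiserver endpoint forwarded to 127.0.0.1:59946 dropped mid-request; the later "client rate limiter Wait returned an error: context deadline exceeded" lines suggest the wait's 9m0s context had expired, so client-go's rate limiter refuses to issue further requests. A minimal sketch of such a wait, assuming client-go; the interval and kubeconfig handling are illustrative, not the helper's exact values:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every few seconds, up to 9m, for a Running dashboard pod;
	// list errors are logged as warnings and retried, as in the helper.
	err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			log.Printf("WARNING: pod list returned: %v", err) // retry on EOF etc.
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatalf("pods never became ready: %v", err)
	}
}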
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:11:49.648460   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:12:07.478260   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:12:54.134038   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:13:03.742003   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:13:14.590704   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:14:07.878793   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:14:32.641161   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
E0601 12:14:37.658638   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
E0601 12:14:38.190821   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:16:22.100021   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 12:16:23.705499   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:16:49.647447   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:17:23.801492   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:23.807374   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:23.819603   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:23.839759   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:23.880751   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:23.960982   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:24.123133   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:24.443725   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:25.084419   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:26.365191   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:17:28.927487   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:17:34.049745   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:17:44.290518   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:17:54.133203   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59946/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0601 12:18:03.739201   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
E0601 12:18:04.770759   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:18:12.701743   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:18:14.588220   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:18:45.730813   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:19:07.876844   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:19:26.755941   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:19:32.638268   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0601 12:19:38.188507   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: (the WARNING above is emitted 51 times in total during this poll; duplicates elided)
E0601 12:20:07.652468   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.crt: no such file or directory
E0601 12:20:20.700616   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 12:20:44.418595   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
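
The identical warnings above come from a retry loop whose overall context has already expired: each attempt first asks the client-side rate limiter for permission, and once the deadline has passed the limiter fails immediately, so every retry logs the same message without sleeping. A minimal Go sketch of that failure mode using golang.org/x/time/rate directly (the limiter settings and the 50ms budget are illustrative, not client-go's or minikube's actual values):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Small client-side rate limiter, similar in spirit to the one
    	// client-go attaches to each REST client (values are illustrative).
    	limiter := rate.NewLimiter(rate.Limit(5), 10)

    	// Overall budget for the whole poll loop, like the test's 9m0s
    	// deadline (shortened here so the example finishes quickly).
    	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
    	defer cancel()

    	for attempt := 1; ; attempt++ {
    		// Once ctx is past its deadline, Wait returns an error at once,
    		// producing the repeated "Wait returned an error" warnings.
    		if err := limiter.Wait(ctx); err != nil {
    			fmt.Printf("attempt %d: client rate limiter Wait returned an error: %v\n", attempt, err)
    			return
    		}
    		time.Sleep(10 * time.Millisecond) // stand-in for one pod-list attempt
    	}
    }
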
start_stop_delete_test.go:289: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (429.727867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:289: status error: exit status 2 (may be ok)
start_stop_delete_test.go:289: "old-k8s-version-20220601114806-16804" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
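
"timed out waiting for the condition" is the standard error text of the wait helpers in k8s.io/apimachinery: the test polls for a pod matching the k8s-app=kubernetes-dashboard selector until its 9m0s budget is exhausted. A rough sketch of such a poll with client-go (a hypothetical helper; minikube's helpers_test.go implements its own variant of this loop):

    package dashboard

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDashboard polls until a pod labelled k8s-app=kubernetes-dashboard
    // reaches the Running phase, or the 9m budget runs out, in which case
    // wait.ErrWaitTimeout ("timed out waiting for the condition") is returned.
    func waitForDashboard(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollImmediate(2*time.Second, 9*time.Minute, func() (bool, error) {
    		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
    			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
    		if err != nil {
    			// Transient errors (like the rate-limiter warnings above)
    			// are swallowed so the poll retries instead of aborting.
    			return false, nil
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    }
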
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220601114806-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220601114806-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.106µs)
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220601114806-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220601114806-16804
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220601114806-16804:

-- stdout --
	[
	    {
	        "Id": "ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273",
	        "Created": "2022-06-01T18:48:12.461821519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T18:53:51.165763227Z",
	            "FinishedAt": "2022-06-01T18:53:48.32715559Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/hosts",
	        "LogPath": "/var/lib/docker/containers/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273/ff69f8f777d8ccd5c9335ccd6124137e6d0ba65dcdd0352da621f3d0a19da273-json.log",
	        "Name": "/old-k8s-version-20220601114806-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220601114806-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220601114806-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34025968d17a5ea4a956d84b5a5a083525af3a67c56680691bf072548c5ecfc2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220601114806-16804",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220601114806-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220601114806-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220601114806-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df15676c71a0eb8c1755841478abd978fa8d8f53d24ceed344774583d711d893",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59947"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59948"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59946"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/df15676c71a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220601114806-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff69f8f777d8",
	                        "old-k8s-version-20220601114806-16804"
	                    ],
	                    "NetworkID": "246cf6a028e4e11a14e92d87f31441d673c4de3a42936ed926f0c32bee110562",
	                    "EndpointID": "248cec2b4960c9be6d236f5305db55c60b48dd57301f892e0015a2ab70c18ccf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
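
The post-mortem above dumps the full `docker inspect` JSON; note that State.Status is "running" even though minikube reported the apiserver as Stopped. When only one field matters, passing a Go template to `docker inspect` is the lighter-weight query. A sketch of that pattern using os/exec directly (minikube wraps such calls in its own cli_runner helper; this is not that code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState shells out to `docker container inspect` with a Go
    // template to read a single field from the JSON shown above, rather
    // than parsing the whole blob.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %v: %s", name, err, out)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("old-k8s-version-20220601114806-16804")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(state) // "running" for the container inspected above
    }
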
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (435.652333ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
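
As the "(may be ok)" note says, `minikube status` deliberately encodes cluster state in its exit code, so exit status 2 with usable stdout ("Running" above) is expected when some components are down. A sketch of handling that without treating a non-zero exit as fatal (standard library only; binary path and profile name copied from the run above):

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	profile := "old-k8s-version-20220601114806-16804"
    	cmd := exec.Command("out/minikube-darwin-amd64", "status",
    		"--format", "{{.Host}}", "-p", profile, "-n", profile)
    	out, err := cmd.Output()

    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// Non-zero exit: the status text in `out` is still meaningful
    		// (here: "Running" with exit status 2).
    		fmt.Printf("status %q, exit code %d (may be ok)\n",
    			strings.TrimSpace(string(out)), exitErr.ExitCode())
    	} else if err != nil {
    		log.Fatal(err) // the binary could not be run at all
    	} else {
    		fmt.Println(strings.TrimSpace(string(out)))
    	}
    }
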
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220601114806-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220601114806-16804 logs -n 25: (3.558510083s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804                       | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:11 PDT | 01 Jun 22 12:11 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804            | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804            | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	| start   | -p newest-cni-20220601121425-16804 --memory=2200           | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |                |                     |                     |
	| start   | -p newest-cni-20220601121425-16804 --memory=2200           | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                 |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| logs    | newest-cni-20220601121425-16804                            | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| logs    | newest-cni-20220601121425-16804                            | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 12:15:17
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
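
The header format documented above is the standard klog/glog prefix used by every I/W/E line that follows. A quick, hypothetical parser for it (the regular expression is mine, not part of minikube):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine matches the documented header:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
    	m := klogLine.FindStringSubmatch(
    		"I0601 12:15:17.696814   30017 out.go:296] Setting OutFile to fd 1 ...")
    	if m != nil {
    		fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6])
    	}
    }
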
	I0601 12:15:17.696814   30017 out.go:296] Setting OutFile to fd 1 ...
	I0601 12:15:17.696973   30017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:15:17.696997   30017 out.go:309] Setting ErrFile to fd 2...
	I0601 12:15:17.697002   30017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:15:17.697117   30017 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 12:15:17.697435   30017 out.go:303] Setting JSON to false
	I0601 12:15:17.712247   30017 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9887,"bootTime":1654101030,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 12:15:17.712361   30017 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 12:15:17.736545   30017 out.go:177] * [newest-cni-20220601121425-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 12:15:17.757487   30017 notify.go:193] Checking for updates...
	I0601 12:15:17.779138   30017 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 12:15:17.801292   30017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:17.844142   30017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 12:15:17.865224   30017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 12:15:17.886436   30017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 12:15:17.908918   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:17.909562   30017 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 12:15:17.981700   30017 docker.go:137] docker version: linux-20.10.14
	I0601 12:15:17.981838   30017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:15:18.111344   30017 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:15:18.053192342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:15:18.133307   30017 out.go:177] * Using the docker driver based on existing profile
	I0601 12:15:18.154970   30017 start.go:284] selected driver: docker
	I0601 12:15:18.154995   30017 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:18.155139   30017 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:15:18.158589   30017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:15:18.288149   30017 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:15:18.231685823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:15:18.288406   30017 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 12:15:18.288423   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:18.288431   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:18.288445   30017 start_flags.go:306] config:
	{Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false nod
e_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
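
	The VerifyComponents map embedded in the config above mirrors the "Waiting for components" line at the start of this sequence: only the keys set to true (apiserver, default_sa, system_pods) are waited on. A minimal Go sketch of extracting that wait list; the helper name is hypothetical, not minikube's actual code:

	    // waitedComponents is a hypothetical helper illustrating how the
	    // VerifyComponents map recorded in the log maps to the set of
	    // components that are actually waited on during start.
	    package main

	    import (
	    	"fmt"
	    	"sort"
	    )

	    func waitedComponents(verify map[string]bool) []string {
	    	var out []string
	    	for name, wanted := range verify {
	    		if wanted {
	    			out = append(out, name)
	    		}
	    	}
	    	sort.Strings(out) // map iteration order is random; sort for stable output
	    	return out
	    }

	    func main() {
	    	verify := map[string]bool{
	    		"apiserver": true, "apps_running": false, "default_sa": true,
	    		"extra": false, "kubelet": false, "node_ready": false, "system_pods": true,
	    	}
	    	fmt.Println(waitedComponents(verify)) // [apiserver default_sa system_pods]
	    }
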
	I0601 12:15:18.310379   30017 out.go:177] * Starting control plane node newest-cni-20220601121425-16804 in cluster newest-cni-20220601121425-16804
	I0601 12:15:18.332359   30017 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:15:18.354367   30017 out.go:177] * Pulling base image ...
	I0601 12:15:18.397207   30017 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:15:18.397218   30017 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:15:18.397297   30017 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:15:18.397315   30017 cache.go:57] Caching tarball of preloaded images
	I0601 12:15:18.397498   30017 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:15:18.397520   30017 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 12:15:18.398565   30017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/config.json ...
	I0601 12:15:18.463117   30017 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:15:18.463133   30017 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:15:18.463143   30017 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:15:18.463194   30017 start.go:352] acquiring machines lock for newest-cni-20220601121425-16804: {Name:mk2d27a35f2c21193ee482d3972539f56f892aa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:15:18.463297   30017 start.go:356] acquired machines lock for "newest-cni-20220601121425-16804" in 67.531µs
	I0601 12:15:18.463320   30017 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:15:18.463327   30017 fix.go:55] fixHost starting: 
	I0601 12:15:18.463535   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:18.531996   30017 fix.go:103] recreateIfNeeded on newest-cni-20220601121425-16804: state=Stopped err=<nil>
	W0601 12:15:18.532020   30017 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:15:18.575630   30017 out.go:177] * Restarting existing docker container for "newest-cni-20220601121425-16804" ...
	I0601 12:15:18.597027   30017 cli_runner.go:164] Run: docker start newest-cni-20220601121425-16804
	I0601 12:15:18.961320   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:19.039434   30017 kic.go:416] container "newest-cni-20220601121425-16804" state is running.
	I0601 12:15:19.040066   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:19.122459   30017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/config.json ...
	I0601 12:15:19.122847   30017 machine.go:88] provisioning docker machine ...
	I0601 12:15:19.122870   30017 ubuntu.go:169] provisioning hostname "newest-cni-20220601121425-16804"
	I0601 12:15:19.122959   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.203538   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.203721   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.203734   30017 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601121425-16804 && echo "newest-cni-20220601121425-16804" | sudo tee /etc/hostname
	I0601 12:15:19.330895   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601121425-16804
	
	I0601 12:15:19.330974   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.406319   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.406535   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.406550   30017 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601121425-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601121425-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601121425-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:15:19.525134   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
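
	The SSH command above patches /etc/hosts so the 127.0.1.1 entry tracks the machine's hostname. A small Go sketch of how such a script can be templated for an arbitrary hostname; hostsPatchCmd is a hypothetical name, and quoting of untrusted hostnames is deliberately ignored here:

	    // hostsPatchCmd rebuilds the /etc/hosts patch script seen in the log
	    // for a given hostname. Illustrative only.
	    package main

	    import "fmt"

	    func hostsPatchCmd(hostname string) string {
	    	return fmt.Sprintf(`
	    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	    			else
	    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	    			fi
	    		fi`, hostname)
	    }

	    func main() { fmt.Println(hostsPatchCmd("newest-cni-20220601121425-16804")) }
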
	I0601 12:15:19.525155   30017 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:15:19.525179   30017 ubuntu.go:177] setting up certificates
	I0601 12:15:19.525188   30017 provision.go:83] configureAuth start
	I0601 12:15:19.525249   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:19.605963   30017 provision.go:138] copyHostCerts
	I0601 12:15:19.606064   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:15:19.606075   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:15:19.606194   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:15:19.606452   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:15:19.606461   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:15:19.606544   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:15:19.606746   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:15:19.606754   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:15:19.606830   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:15:19.606964   30017 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601121425-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601121425-16804]
	I0601 12:15:19.708984   30017 provision.go:172] copyRemoteCerts
	I0601 12:15:19.709053   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:15:19.709100   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.783853   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:19.868774   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:15:19.886007   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 12:15:19.903518   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 12:15:19.920926   30017 provision.go:86] duration metric: configureAuth took 395.730263ms
	I0601 12:15:19.920938   30017 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:15:19.921089   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:19.921150   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.993596   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.993740   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.993749   30017 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:15:20.111525   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:15:20.111538   30017 ubuntu.go:71] root file system type: overlay
	I0601 12:15:20.111694   30017 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:15:20.111786   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.184583   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:20.184728   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:20.184777   30017 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:15:20.308016   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:15:20.308149   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.384660   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:20.384802   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:20.384815   30017 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:15:20.505728   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:15:20.505742   30017 machine.go:91] provisioned docker machine in 1.382897342s
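
	Provisioning only restarts Docker when the rendered docker.service unit differs from what is already on disk; that is what the "diff -u ... || { mv ...; systemctl restart docker; }" one-liner above does. A sketch of that write-if-changed check in Go, using assumed helper names rather than minikube's real API:

	    // unitNeedsUpdate compares the freshly rendered unit with the file on
	    // disk and reports whether an install plus daemon-reload/restart is needed.
	    package main

	    import (
	    	"bytes"
	    	"fmt"
	    	"os"
	    )

	    func unitNeedsUpdate(path string, rendered []byte) (bool, error) {
	    	current, err := os.ReadFile(path)
	    	if os.IsNotExist(err) {
	    		return true, nil // no unit installed yet: install and restart
	    	}
	    	if err != nil {
	    		return false, err
	    	}
	    	return !bytes.Equal(current, rendered), nil
	    }

	    func main() {
	    	changed, err := unitNeedsUpdate("/lib/systemd/system/docker.service", []byte("..."))
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("restart docker:", changed)
	    }
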
	I0601 12:15:20.505757   30017 start.go:306] post-start starting for "newest-cni-20220601121425-16804" (driver="docker")
	I0601 12:15:20.505772   30017 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:15:20.505836   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:15:20.505881   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.578638   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:20.665477   30017 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:15:20.669149   30017 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:15:20.669167   30017 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:15:20.669174   30017 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:15:20.669178   30017 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:15:20.669187   30017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:15:20.669292   30017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:15:20.669427   30017 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:15:20.669624   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:15:20.677091   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:15:20.694336   30017 start.go:309] post-start completed in 188.569022ms
	I0601 12:15:20.694408   30017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:15:20.694474   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.765912   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:20.848377   30017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:15:20.853146   30017 fix.go:57] fixHost completed within 2.389833759s
	I0601 12:15:20.853157   30017 start.go:81] releasing machines lock for "newest-cni-20220601121425-16804", held for 2.389868555s
	I0601 12:15:20.853232   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:20.927151   30017 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:15:20.927156   30017 ssh_runner.go:195] Run: systemctl --version
	I0601 12:15:20.927211   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.927230   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:21.005587   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:21.008584   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:21.222895   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:15:21.235416   30017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:15:21.245540   30017 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:15:21.245597   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:15:21.254909   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:15:21.269044   30017 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:15:21.339555   30017 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:15:21.409052   30017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:15:21.419341   30017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:15:21.493119   30017 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:15:21.503208   30017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:15:21.539095   30017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:15:21.620908   30017 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:15:21.621108   30017 cli_runner.go:164] Run: docker exec -t newest-cni-20220601121425-16804 dig +short host.docker.internal
	I0601 12:15:21.756400   30017 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:15:21.756560   30017 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:15:21.760987   30017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:15:21.771888   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:21.870120   30017 out.go:177]   - kubelet.network-plugin=cni
	I0601 12:15:21.891289   30017 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 12:15:21.913108   30017 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:15:21.913239   30017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:15:21.945830   30017 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 12:15:21.945847   30017 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:15:21.945904   30017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:15:21.978568   30017 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 12:15:21.978600   30017 cache_images.go:84] Images are preloaded, skipping loading
	I0601 12:15:21.978678   30017 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:15:22.052046   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:22.052057   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:22.052077   30017 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 12:15:22.052108   30017 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601121425-16804 NodeName:newest-cni-20220601121425-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:fals
e] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:15:22.052229   30017 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220601121425-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
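
	The kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". A dependency-free Go sketch, illustrative only, of splitting such a stream into its documents:

	    // splitDocs breaks a multi-document YAML stream on "---" separators
	    // without pulling in a YAML library. Hypothetical helper.
	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    func splitDocs(y string) []string {
	    	var docs []string
	    	for _, d := range strings.Split(y, "\n---\n") {
	    		if s := strings.TrimSpace(d); s != "" {
	    			docs = append(docs, s)
	    		}
	    	}
	    	return docs
	    }

	    func main() {
	    	y := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n"
	    	fmt.Println(len(splitDocs(y)), "documents") // 2 documents
	    }
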
	
	I0601 12:15:22.052316   30017 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601121425-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 12:15:22.052375   30017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:15:22.060110   30017 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:15:22.060163   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:15:22.067009   30017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0601 12:15:22.079768   30017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:15:22.092592   30017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2188 bytes)
	I0601 12:15:22.105644   30017 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:15:22.109585   30017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:15:22.119524   30017 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804 for IP: 192.168.58.2
	I0601 12:15:22.119629   30017 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:15:22.119701   30017 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:15:22.119783   30017 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/client.key
	I0601 12:15:22.119849   30017 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.key.cee25041
	I0601 12:15:22.119898   30017 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.key
	I0601 12:15:22.120087   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:15:22.120128   30017 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:15:22.120139   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:15:22.120167   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:15:22.120203   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:15:22.120233   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:15:22.120294   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:15:22.120897   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:15:22.138917   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:15:22.156269   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:15:22.173707   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 12:15:22.191392   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:15:22.208705   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:15:22.225757   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:15:22.243397   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:15:22.260267   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:15:22.278248   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:15:22.295471   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:15:22.313361   30017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:15:22.325966   30017 ssh_runner.go:195] Run: openssl version
	I0601 12:15:22.331944   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:15:22.339732   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.343889   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.343932   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.349483   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 12:15:22.356904   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:15:22.364729   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.368546   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.368603   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.373820   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:15:22.381072   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:15:22.388832   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.393055   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.393215   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.399338   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 12:15:22.406808   30017 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps
_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:22.406948   30017 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:15:22.436239   30017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:15:22.444134   30017 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:15:22.444147   30017 kubeadm.go:626] restartCluster start
	I0601 12:15:22.444191   30017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:15:22.451114   30017 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.451166   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:22.526099   30017 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601121425-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:22.526306   30017 kubeconfig.go:127] "newest-cni-20220601121425-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:15:22.526699   30017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:22.528121   30017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:15:22.535877   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.535949   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.544928   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.745924   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.746094   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.756505   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.947083   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.947296   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.957656   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.147069   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.147288   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.157915   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.345086   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.345192   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.353982   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.545059   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.545242   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.555850   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.745457   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.745640   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.756690   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.945536   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.945668   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.955857   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.145431   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.145578   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.155775   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.345524   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.345625   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.356916   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.545430   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.545593   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.556086   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.746772   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.746951   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.757868   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.946734   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.946835   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.957121   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.146700   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.146894   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.158129   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.346742   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.346876   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.356926   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.546642   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.546735   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.556027   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.556041   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.556098   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.564985   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.565004   30017 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
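
	The run of "Checking apiserver status" entries above is a fixed-interval poll: roughly every 200ms the pgrep check is re-run until a deadline passes, at which point the cluster is declared in need of reconfiguration with "timed out waiting for the condition". A compact Go sketch of that poll-until-deadline loop; pollAPIServer is a hypothetical name:

	    // pollAPIServer re-runs check at a fixed interval until it succeeds
	    // or the timeout elapses, mirroring the retry loop visible in the log.
	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    func pollAPIServer(check func() error, interval, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if err := check(); err == nil {
	    			return nil
	    		}
	    		time.Sleep(interval)
	    	}
	    	return errors.New("timed out waiting for the condition")
	    }

	    func main() {
	    	err := pollAPIServer(func() error { return errors.New("no pid") },
	    		200*time.Millisecond, 2*time.Second)
	    	fmt.Println(err)
	    }
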
	I0601 12:15:25.565016   30017 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:15:25.565082   30017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:15:25.602459   30017 docker.go:442] Stopping containers: [68d9566a5229 9866045a2740 08fe7c389d05 483211ea09d2 ad8f707a9ba6 acf8b9eb91df 0aace92ddb91 bfd9ea02d125 e12d8d3ebb52 e1445bd1efd3 f50e317e9858 11c48b791323 0b270245a55f c410fd12249e 3a157a1c3457 6ae49c2db4a0 4787fe993ca1 c862ef500594]
	I0601 12:15:25.602539   30017 ssh_runner.go:195] Run: docker stop 68d9566a5229 9866045a2740 08fe7c389d05 483211ea09d2 ad8f707a9ba6 acf8b9eb91df 0aace92ddb91 bfd9ea02d125 e12d8d3ebb52 e1445bd1efd3 f50e317e9858 11c48b791323 0b270245a55f c410fd12249e 3a157a1c3457 6ae49c2db4a0 4787fe993ca1 c862ef500594
	I0601 12:15:25.634269   30017 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:15:25.645054   30017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:15:25.653095   30017 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 19:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 19:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  1 19:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 19:14 /etc/kubernetes/scheduler.conf
	
	I0601 12:15:25.653147   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 12:15:25.660894   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 12:15:25.668544   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 12:15:25.675734   30017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.675782   30017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:15:25.682775   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 12:15:25.689821   30017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.689865   30017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 12:15:25.697022   30017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:15:25.704775   30017 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:15:25.704788   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:25.750948   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.446128   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.578782   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.628887   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
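Note that this reconfigure path runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml rather than a full kubeadm init, so the control-plane manifests and credentials are rewritten in place without re-bootstrapping existing cluster state.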
	I0601 12:15:26.681058   30017 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:15:26.681141   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:27.192960   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:27.692867   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:28.192589   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:28.203564   30017 api_server.go:71] duration metric: took 1.522538237s to wait for apiserver process to appear ...
	I0601 12:15:28.203585   30017 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:15:28.203598   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:31.033454   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 12:15:31.033469   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 12:15:31.533706   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:31.539933   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:15:31.539949   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:15:32.033574   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:32.040068   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:15:32.040083   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:15:32.533712   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:32.539591   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 200:
	ok
	I0601 12:15:32.546412   30017 api_server.go:140] control plane version: v1.23.6
	I0601 12:15:32.546424   30017 api_server.go:130] duration metric: took 4.342864983s to wait for apiserver health ...
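The healthz wait above is a simple poll-until-200 loop, and the initial 403 is expected: the probe is anonymous, and the rbac/bootstrap-roles hook that grants unauthenticated access to /healthz had not finished yet (it is the [-] entry in the 500 responses). A minimal Go sketch of such a loop, assuming an anonymous client that skips TLS verification; this is an illustration of the pattern, not minikube's exact client:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint every 500ms until it
    // answers 200 "ok" or the deadline passes, treating 403/500 as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // assumption for illustration: anonymous probe, self-signed certs
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get(url)
            if err != nil {
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        return fmt.Errorf("apiserver %s not healthy within %v", url, timeout)
    }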
	I0601 12:15:32.546432   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:32.546437   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:32.546449   30017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:15:32.556727   30017 system_pods.go:59] 8 kube-system pods found
	I0601 12:15:32.556746   30017 system_pods.go:61] "coredns-64897985d-j2plh" [3a8967e9-d37b-4f71-b57f-0b3a34dbdf08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 12:15:32.556751   30017 system_pods.go:61] "etcd-newest-cni-20220601121425-16804" [c181135a-268d-4847-8dd4-ec0e0f06226e] Running
	I0601 12:15:32.556758   30017 system_pods.go:61] "kube-apiserver-newest-cni-20220601121425-16804" [30ec5624-7260-4516-a9b7-2befbb6626aa] Running
	I0601 12:15:32.556762   30017 system_pods.go:61] "kube-controller-manager-newest-cni-20220601121425-16804" [ecf69675-926e-41de-a951-ddc2afa7194b] Running
	I0601 12:15:32.556767   30017 system_pods.go:61] "kube-proxy-w4cvx" [8cd61f44-5d14-434c-a84e-ffd68ac7bc21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 12:15:32.556773   30017 system_pods.go:61] "kube-scheduler-newest-cni-20220601121425-16804" [15357952-87e8-4636-8cdf-eb7113a0682b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 12:15:32.556780   30017 system_pods.go:61] "metrics-server-b955d9d8-x4szx" [caffaac7-3821-49eb-b2de-cc43c2d6c5c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:15:32.556784   30017 system_pods.go:61] "storage-provisioner" [ef765f27-a5f6-468b-9428-8a223e30a190] Running
	I0601 12:15:32.556788   30017 system_pods.go:74] duration metric: took 10.334849ms to wait for pod list to return data ...
	I0601 12:15:32.556794   30017 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:15:32.561801   30017 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:15:32.561816   30017 node_conditions.go:123] node cpu capacity is 6
	I0601 12:15:32.561826   30017 node_conditions.go:105] duration metric: took 5.028617ms to run NodePressure ...
	I0601 12:15:32.561842   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:32.734164   30017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:15:32.745262   30017 ops.go:34] apiserver oom_adj: -16
	I0601 12:15:32.745275   30017 kubeadm.go:630] restartCluster took 10.301195785s
	I0601 12:15:32.745282   30017 kubeadm.go:397] StartCluster complete in 10.33855509s
	I0601 12:15:32.745298   30017 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:32.745396   30017 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:32.746012   30017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:32.749598   30017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601121425-16804" rescaled to 1
	I0601 12:15:32.749637   30017 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:15:32.749651   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:15:32.749675   30017 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:15:32.810213   30017 out.go:177] * Verifying Kubernetes components...
	I0601 12:15:32.749931   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:32.810312   30017 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810314   30017 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810324   30017 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810351   30017 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.814156   30017 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 12:15:32.847251   30017 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601121425-16804"
	I0601 12:15:32.847255   30017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601121425-16804"
	W0601 12:15:32.847275   30017 addons.go:165] addon dashboard should already be in state true
	I0601 12:15:32.847284   30017 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601121425-16804"
	W0601 12:15:32.847303   30017 addons.go:165] addon metrics-server should already be in state true
	I0601 12:15:32.847264   30017 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601121425-16804"
	I0601 12:15:32.847322   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 12:15:32.847352   30017 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:15:32.847390   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847393   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847450   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847745   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.848906   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.848930   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.849156   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.874342   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:32.980743   30017 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601121425-16804"
	W0601 12:15:32.992368   30017 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:15:32.992341   30017 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 12:15:32.992427   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:33.011979   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:15:33.011996   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:15:33.012078   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.013569   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:33.037211   30017 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:15:33.068082   30017 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:15:33.111016   30017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:15:33.111126   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:33.148302   30017 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:15:33.171397   30017 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:15:33.192123   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:15:33.184596   30017 api_server.go:71] duration metric: took 434.942794ms to wait for apiserver process to appear ...
	I0601 12:15:33.192147   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:15:33.192164   30017 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:15:33.192166   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:15:33.192181   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:33.192244   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.192273   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.206856   30017 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:15:33.206882   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:15:33.206858   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 200:
	ok
	I0601 12:15:33.206999   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.209882   30017 api_server.go:140] control plane version: v1.23.6
	I0601 12:15:33.209903   30017 api_server.go:130] duration metric: took 17.729398ms to wait for apiserver health ...
	I0601 12:15:33.209909   30017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:15:33.212306   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.220067   30017 system_pods.go:59] 8 kube-system pods found
	I0601 12:15:33.220104   30017 system_pods.go:61] "coredns-64897985d-j2plh" [3a8967e9-d37b-4f71-b57f-0b3a34dbdf08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 12:15:33.220124   30017 system_pods.go:61] "etcd-newest-cni-20220601121425-16804" [c181135a-268d-4847-8dd4-ec0e0f06226e] Running
	I0601 12:15:33.220134   30017 system_pods.go:61] "kube-apiserver-newest-cni-20220601121425-16804" [30ec5624-7260-4516-a9b7-2befbb6626aa] Running
	I0601 12:15:33.220141   30017 system_pods.go:61] "kube-controller-manager-newest-cni-20220601121425-16804" [ecf69675-926e-41de-a951-ddc2afa7194b] Running
	I0601 12:15:33.220151   30017 system_pods.go:61] "kube-proxy-w4cvx" [8cd61f44-5d14-434c-a84e-ffd68ac7bc21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 12:15:33.220167   30017 system_pods.go:61] "kube-scheduler-newest-cni-20220601121425-16804" [15357952-87e8-4636-8cdf-eb7113a0682b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 12:15:33.220181   30017 system_pods.go:61] "metrics-server-b955d9d8-x4szx" [caffaac7-3821-49eb-b2de-cc43c2d6c5c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:15:33.220201   30017 system_pods.go:61] "storage-provisioner" [ef765f27-a5f6-468b-9428-8a223e30a190] Running
	I0601 12:15:33.220209   30017 system_pods.go:74] duration metric: took 10.294658ms to wait for pod list to return data ...
	I0601 12:15:33.220218   30017 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:15:33.223744   30017 default_sa.go:45] found service account: "default"
	I0601 12:15:33.223760   30017 default_sa.go:55] duration metric: took 3.535466ms for default service account to be created ...
	I0601 12:15:33.223776   30017 kubeadm.go:572] duration metric: took 474.122479ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 12:15:33.223798   30017 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:15:33.228770   30017 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:15:33.228786   30017 node_conditions.go:123] node cpu capacity is 6
	I0601 12:15:33.228800   30017 node_conditions.go:105] duration metric: took 4.995287ms to run NodePressure ...
	I0601 12:15:33.228813   30017 start.go:213] waiting for startup goroutines ...
	I0601 12:15:33.301789   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.313361   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.319829   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.382558   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:15:33.382572   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:15:33.463411   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:15:33.463454   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:15:33.479163   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:15:33.479594   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:15:33.479617   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:15:33.484722   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:15:33.491345   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:15:33.491376   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:15:33.575793   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:15:33.575859   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:15:33.593652   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:15:33.681482   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:15:33.681500   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:15:33.857868   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:15:33.857885   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:15:33.892748   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:15:33.892767   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:15:33.984332   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:15:33.984347   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:15:34.064156   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:15:34.064169   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:15:34.086338   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:15:34.086357   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:15:34.173318   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:15:34.173333   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:15:34.196575   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:15:34.695837   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.216659486s)
	I0601 12:15:34.695873   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211144239s)
	I0601 12:15:34.757351   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.163679978s)
	I0601 12:15:34.757383   30017 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601121425-16804"
	I0601 12:15:34.884487   30017 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 12:15:34.905569   30017 addons.go:417] enableAddons completed in 2.155912622s
	I0601 12:15:34.935673   30017 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:15:34.958503   30017 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601121425-16804" cluster and "default" namespace by default
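The addon installation above follows one pattern throughout: each manifest is written onto the node under /etc/kubernetes/addons (the "scp memory --> ..." lines), then a group of manifests is applied in a single kubectl invocation using the node-local kubeconfig. A sketch of that pattern, with copyToNode and runSSH as hypothetical stand-ins for minikube's ssh_runner helpers:

    package main

    import "strings"

    // deployAddon copies each manifest onto the node, then applies them all in
    // one kubectl invocation against the node-local kubeconfig, as in the log.
    func deployAddon(copyToNode func(data []byte, dst string) error,
        runSSH func(cmd string) error, manifests map[string][]byte) error {
        var paths []string
        for name, data := range manifests {
            dst := "/etc/kubernetes/addons/" + name
            if err := copyToNode(data, dst); err != nil {
                return err
            }
            paths = append(paths, dst)
        }
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.23.6/kubectl apply -f " +
            strings.Join(paths, " -f ")
        return runSSH(cmd)
    }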
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 18:53:51 UTC, end at Wed 2022-06-01 19:20:47 UTC. --
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 systemd[1]: Starting Docker Application Container Engine...
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.457800407Z" level=info msg="Starting up"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459880544Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459918540Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459935542Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.459943396Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461558394Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461592263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461607683Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.461615678Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.467062010Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.471139789Z" level=info msg="Loading containers: start."
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.555493702Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.587145357Z" level=info msg="Loading containers: done."
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.597281456Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.597355151Z" level=info msg="Daemon has completed initialization"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 systemd[1]: Started Docker Application Container Engine.
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.622139295Z" level=info msg="API listen on [::]:2376"
	Jun 01 18:53:51 old-k8s-version-20220601114806-16804 dockerd[131]: time="2022-06-01T18:53:51.626019498Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-01T19:20:49Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:20:50 up  1:23,  0 users,  load average: 0.08, 0.71, 0.80
	Linux old-k8s-version-20220601114806-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 18:53:51 UTC, end at Wed 2022-06-01 19:20:50 UTC. --
	Jun 01 19:20:48 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: I0601 19:20:49.385788   34102 server.go:410] Version: v1.16.0
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: I0601 19:20:49.385989   34102 plugins.go:100] No cloud provider specified.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: I0601 19:20:49.386001   34102 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: I0601 19:20:49.387682   34102 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: W0601 19:20:49.388385   34102 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: W0601 19:20:49.388447   34102 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 kubelet[34102]: F0601 19:20:49.388479   34102 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 19:20:49 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: I0601 19:20:50.140490   34138 server.go:410] Version: v1.16.0
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: I0601 19:20:50.140970   34138 plugins.go:100] No cloud provider specified.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: I0601 19:20:50.141047   34138 server.go:773] Client rotation is on, will bootstrap in background
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: I0601 19:20:50.143222   34138 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: W0601 19:20:50.144420   34138 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: W0601 19:20:50.144514   34138 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 kubelet[34138]: F0601 19:20:50.144644   34138 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 01 19:20:50 old-k8s-version-20220601114806-16804 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0601 12:20:49.940120   30382 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 2 (431.678564ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220601114806-16804" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.03s)
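The post-mortem output above points at a single root cause: kubelet v1.16 exits immediately with "failed to run Kubelet: mountpoint for cpu not found" (likely because this kubelet predates cgroup v2 support and finds no cgroup v1 cpu controller mount on the linuxkit host; that reading is an inference consistent with the logs, not stated in them), systemd restarts it endlessly (restart counter 1669), and since the kubelet-managed dockershim socket never comes up, the CRI status query fails and no control-plane containers run, which is why the apiserver reports "Stopped".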

TestStartStop/group/default-k8s-different-port/serial/Pause (43.63s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220601120641-16804 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804: exit status 2 (16.116685014s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804: exit status 2 (16.108053617s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220601120641-16804 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601120641-16804
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601120641-16804:

-- stdout --
	[
	    {
	        "Id": "4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362",
	        "Created": "2022-06-01T19:06:48.165680259Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:07:47.977315615Z",
	            "FinishedAt": "2022-06-01T19:07:45.986695989Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/hosts",
	        "LogPath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362-json.log",
	        "Name": "/default-k8s-different-port-20220601120641-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601120641-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601120641-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601120641-16804",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601120641-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601120641-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601120641-16804",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601120641-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cedfbd388f3387f90017da6086733698d8a2f3c09529b6401e6d01e3bb16ba75",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61977"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61978"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61979"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61980"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cedfbd388f33",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601120641-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4bda9ca272bb",
	                        "default-k8s-different-port-20220601120641-16804"
	                    ],
	                    "NetworkID": "df289b10364773815e73fc407f32919c59e23733b1e76528cfb0d723d90782ba",
	                    "EndpointID": "8a163356dbb1c8f00a73a8e242f6a0f0f4c2e4a2c0e8539a4ff438ce129b077d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
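The cli_runner lines earlier in this log resolve host ports from exactly this Ports map, using a Go template passed to docker inspect -f. A sketch of that lookup in Go; per the output above, hostPort("default-k8s-different-port-20220601120641-16804", "8444") would return "61981":

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort asks docker inspect for the first host binding of a container
    // port via a Go template, the same template seen in the cli_runner lines.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }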
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220601120641-16804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220601120641-16804 logs -n 25: (2.724926767s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	| delete  | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |                |                     |                     |
	|         | --driver=docker                                   |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:01 PDT | 01 Jun 22 12:02 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |                |                     |                     |
	|         | --driver=docker                                   |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601120640-16804      | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | disable-driver-mounts-20220601120640-16804        |                                                 |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:11 PDT | 01 Jun 22 12:11 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 12:07:46
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 12:07:46.714016   29448 out.go:296] Setting OutFile to fd 1 ...
	I0601 12:07:46.714188   29448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:07:46.714193   29448 out.go:309] Setting ErrFile to fd 2...
	I0601 12:07:46.714197   29448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:07:46.714298   29448 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 12:07:46.714566   29448 out.go:303] Setting JSON to false
	I0601 12:07:46.729641   29448 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9436,"bootTime":1654101030,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 12:07:46.729740   29448 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 12:07:46.752140   29448 out.go:177] * [default-k8s-different-port-20220601120641-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 12:07:46.795651   29448 notify.go:193] Checking for updates...
	I0601 12:07:46.817606   29448 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 12:07:46.839623   29448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:07:46.860438   29448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 12:07:46.881832   29448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 12:07:46.903780   29448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 12:07:46.926112   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:07:46.926797   29448 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 12:07:47.000143   29448 docker.go:137] docker version: linux-20.10.14
	I0601 12:07:47.000297   29448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:07:47.131409   29448 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:07:47.070748627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:07:47.153289   29448 out.go:177] * Using the docker driver based on existing profile
	I0601 12:07:47.173957   29448 start.go:284] selected driver: docker
	I0601 12:07:47.173973   29448 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:47.174080   29448 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:07:47.176304   29448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:07:47.306114   29448 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:07:47.248401536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:07:47.306271   29448 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 12:07:47.306288   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:07:47.306295   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:07:47.306302   29448 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:47.350023   29448 out.go:177] * Starting control plane node default-k8s-different-port-20220601120641-16804 in cluster default-k8s-different-port-20220601120641-16804
	I0601 12:07:47.372290   29448 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:07:47.393752   29448 out.go:177] * Pulling base image ...
	I0601 12:07:47.437193   29448 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:07:47.437222   29448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:07:47.437287   29448 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:07:47.437322   29448 cache.go:57] Caching tarball of preloaded images
	I0601 12:07:47.437521   29448 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:07:47.437544   29448 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 12:07:47.438529   29448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/config.json ...
	I0601 12:07:47.502152   29448 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:07:47.502172   29448 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:07:47.502180   29448 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:07:47.502221   29448 start.go:352] acquiring machines lock for default-k8s-different-port-20220601120641-16804: {Name:mk5000a48e15938a8ff193f7b1e0ef0205ca69c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:07:47.502307   29448 start.go:356] acquired machines lock for "default-k8s-different-port-20220601120641-16804" in 54.718µs
	I0601 12:07:47.502327   29448 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:07:47.502337   29448 fix.go:55] fixHost starting: 
	I0601 12:07:47.502581   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:07:47.570243   29448 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601120641-16804: state=Stopped err=<nil>
	W0601 12:07:47.570270   29448 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:07:47.592778   29448 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601120641-16804" ...
	I0601 12:07:47.614873   29448 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601120641-16804
	I0601 12:07:47.973167   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:07:48.048677   29448 kic.go:416] container "default-k8s-different-port-20220601120641-16804" state is running.
	I0601 12:07:48.049618   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.132914   29448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/config.json ...
	I0601 12:07:48.133339   29448 machine.go:88] provisioning docker machine ...
	I0601 12:07:48.133364   29448 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601120641-16804"
	I0601 12:07:48.133419   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.212122   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:48.212345   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:48.212357   29448 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601120641-16804 && echo "default-k8s-different-port-20220601120641-16804" | sudo tee /etc/hostname
	I0601 12:07:48.344170   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601120641-16804
	
	I0601 12:07:48.344259   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.422969   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:48.423135   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:48.423162   29448 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601120641-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601120641-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601120641-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:07:48.544579   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
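The hostname command above is idempotent: it leaves /etc/hosts alone if the name is already mapped, rewrites an existing 127.0.1.1 entry if one exists, and only otherwise appends a new line (the same check-then-rewrite pattern appears again below for host.minikube.internal). A rough local Go equivalent, with the file path as a parameter because the real command runs over SSH with sudo:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the shell logic above: no-op if the name is
// already mapped, replace an existing 127.0.1.1 line, else append.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil // already mapped
	}
	entry := "127.0.1.1 " + name
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.Match(data) {
		data = loop.ReplaceAll(data, []byte(entry))
	} else {
		if len(data) > 0 && data[len(data)-1] != '\n' {
			data = append(data, '\n')
		}
		data = append(data, []byte(entry+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	// "hosts" here is a local stand-in for /etc/hosts.
	if err := ensureHostname("hosts", "default-k8s-different-port-20220601120641-16804"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}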
	I0601 12:07:48.544600   29448 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:07:48.544627   29448 ubuntu.go:177] setting up certificates
	I0601 12:07:48.544647   29448 provision.go:83] configureAuth start
	I0601 12:07:48.544718   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.622717   29448 provision.go:138] copyHostCerts
	I0601 12:07:48.622832   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:07:48.622842   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:07:48.622937   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:07:48.623147   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:07:48.623156   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:07:48.623223   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:07:48.623375   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:07:48.623383   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:07:48.623455   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:07:48.623608   29448 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601120641-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601120641-16804]
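The server certificate generated here carries the SANs listed in the log line (container IP 192.168.58.2, loopback, minikube, and the profile name), so the Docker TLS endpoint is valid under any of those names. An illustrative sketch only, self-signed for brevity where minikube actually signs with the CA at certs/ca.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the provision.go log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220601120641-16804"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220601120641-16804"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}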
	I0601 12:07:48.807397   29448 provision.go:172] copyRemoteCerts
	I0601 12:07:48.807465   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:07:48.807513   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.880166   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:48.968528   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:07:48.985675   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0601 12:07:49.003100   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 12:07:49.020622   29448 provision.go:86] duration metric: configureAuth took 475.965498ms
	I0601 12:07:49.020634   29448 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:07:49.020838   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:07:49.020914   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.093673   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.093829   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.093841   29448 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:07:49.210457   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:07:49.210469   29448 ubuntu.go:71] root file system type: overlay
	I0601 12:07:49.210594   29448 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:07:49.210662   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.283147   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.283317   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.283384   29448 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:07:49.409302   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
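The %!s(MISSING) token in the echoed command (and the "0%!"(MISSING) values in the kubelet config further down) is not corruption in the generated files: it is Go's fmt package rendering a format verb that has no matching operand. The provisioning command contains a literal %s, and the kubelet eviction thresholds are literally "0%", so when those strings passed through a fmt-style logger the stray verbs were flagged. A minimal reproduction:

package main

import "fmt"

func main() {
	// The %s has no operand, so fmt prints it as %!s(MISSING),
	// exactly as in the log above.
	fmt.Printf("sudo mkdir -p /lib/systemd/system && printf %s \"[Unit]...\"\n")
}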
	I0601 12:07:49.409387   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.484156   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.484332   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.484346   29448 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:07:49.604444   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:07:49.604461   29448 machine.go:91] provisioned docker machine in 1.471129128s
	I0601 12:07:49.604471   29448 start.go:306] post-start starting for "default-k8s-different-port-20220601120641-16804" (driver="docker")
	I0601 12:07:49.604477   29448 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:07:49.604532   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:07:49.604575   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.678684   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:49.764315   29448 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:07:49.767903   29448 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:07:49.767938   29448 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:07:49.767950   29448 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:07:49.767956   29448 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:07:49.767967   29448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:07:49.768069   29448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:07:49.768203   29448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:07:49.768341   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:07:49.775308   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:07:49.792544   29448 start.go:309] post-start completed in 188.064447ms
	I0601 12:07:49.792632   29448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:07:49.792692   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.865476   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:49.948688   29448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:07:49.953166   29448 fix.go:57] fixHost completed within 2.450856104s
	I0601 12:07:49.953184   29448 start.go:81] releasing machines lock for "default-k8s-different-port-20220601120641-16804", held for 2.450894668s
	I0601 12:07:49.953267   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.025599   29448 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:07:50.025607   29448 ssh_runner.go:195] Run: systemctl --version
	I0601 12:07:50.025662   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.025679   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.104052   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:50.107382   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:50.327885   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:07:50.339355   29448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:07:50.349315   29448 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:07:50.349405   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:07:50.358883   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:07:50.372373   29448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:07:50.437808   29448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:07:50.507439   29448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:07:50.517896   29448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:07:50.595253   29448 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:07:50.605452   29448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:07:50.643710   29448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:07:50.725451   29448 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:07:50.725679   29448 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220601120641-16804 dig +short host.docker.internal
	I0601 12:07:50.872144   29448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:07:50.872230   29448 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:07:50.877033   29448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
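Digging host.docker.internal from inside the container is how the host's address (192.168.65.2 here) is discovered on Docker Desktop: Docker's embedded DNS resolves that name to the host gateway. A minimal in-container equivalent using Go's resolver:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Resolves only inside a Docker Desktop container, where the
	// embedded DNS knows host.docker.internal.
	addrs, err := net.LookupHost("host.docker.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println(addrs[0]) // e.g. 192.168.65.2, as in the log above
}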
	I0601 12:07:50.888570   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.963632   29448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:07:50.963715   29448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:07:50.995814   29448 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:07:50.995829   29448 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:07:50.995925   29448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:07:51.027279   29448 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:07:51.027298   29448 cache_images.go:84] Images are preloaded, skipping loading
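The "Images are preloaded, skipping loading" decision reduces to a set comparison: every image required for the requested Kubernetes version must appear in the output of docker images --format {{.Repository}}:{{.Tag}}. A simplified sketch of that check, with the required list abbreviated from the stdout above rather than derived from the version as minikube does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.6",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/pause:3.6",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, tag := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[tag] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img, "- extraction needed")
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}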
	I0601 12:07:51.027382   29448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:07:51.102384   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:07:51.102395   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:07:51.102413   29448 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 12:07:51.102444   29448 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601120641-16804 NodeName:default-k8s-different-port-20220601120641-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:07:51.102543   29448 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220601120641-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
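A note on the evictionHard values above: if this YAML is ever pushed through a Printf-style formatter without arguments, the bare % in "0%" renders as "0%!"(MISSING), because Go's fmt treats an unmatched % as a broken verb. A minimal Go illustration of the pitfall and two safe alternatives:

    package main

    import "fmt"

    func main() {
        yaml := `nodefs.available: "0%"`

        fmt.Printf(yaml)       // BAD: prints nodefs.available: "0%!"(MISSING)
        fmt.Printf("%s", yaml) // OK: data passed as an argument, not as the format
        fmt.Print(yaml)        // OK: no format parsing at all
    }
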
	I0601 12:07:51.102663   29448 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220601120641-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
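
The kubelet drop-in above is rendered from the node's settings before being copied to the machine. A hedged sketch of that rendering with text/template (the struct and field names are illustrative, not minikube's actual types; the flags are abridged from the unit shown above):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOpts is an illustrative subset of the values substituted
    // into the ExecStart line above.
    type kubeletOpts struct {
        BinRoot, NodeName, NodeIP string
    }

    const dropin = `[Service]
    ExecStart=
    ExecStart={{.BinRoot}}/kubelet --container-runtime=docker --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropin))
        _ = t.Execute(os.Stdout, kubeletOpts{
            BinRoot:  "/var/lib/minikube/binaries/v1.23.6",
            NodeName: "default-k8s-different-port-20220601120641-16804",
            NodeIP:   "192.168.58.2",
        })
    }
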
	I0601 12:07:51.102752   29448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:07:51.110604   29448 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:07:51.110649   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:07:51.117796   29448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0601 12:07:51.130564   29448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:07:51.142900   29448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0601 12:07:51.156672   29448 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:07:51.160712   29448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:07:51.170404   29448 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804 for IP: 192.168.58.2
	I0601 12:07:51.170523   29448 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:07:51.170574   29448 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:07:51.170655   29448 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.key
	I0601 12:07:51.170735   29448 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.key.cee25041
	I0601 12:07:51.170798   29448 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.key
	I0601 12:07:51.170999   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:07:51.171039   29448 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:07:51.171051   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:07:51.171085   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:07:51.171121   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:07:51.171151   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:07:51.171217   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:07:51.171773   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:07:51.189027   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:07:51.206439   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:07:51.223833   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 12:07:51.241255   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:07:51.258545   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:07:51.275931   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:07:51.293213   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:07:51.310440   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:07:51.327345   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:07:51.344962   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:07:51.362940   29448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:07:51.376186   29448 ssh_runner.go:195] Run: openssl version
	I0601 12:07:51.381980   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:07:51.389866   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.393905   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.393948   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.400411   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:07:51.408002   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:07:51.415937   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.420272   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.420316   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.426141   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 12:07:51.433640   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:07:51.442012   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.446045   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.446083   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.451363   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
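
The openssl/ln pairs above install each CA under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how tools locate trust anchors in /etc/ssl/certs. A sketch of the same two steps, shelling out exactly as the log does:

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkCert symlinks pemPath to /etc/ssl/certs/<subject-hash>.0,
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` pair above.
    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        return exec.Command("ln", "-fs", pemPath, link).Run()
    }
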
	I0601 12:07:51.459039   29448 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:51.459130   29448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:07:51.489623   29448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:07:51.497725   29448 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:07:51.497738   29448 kubeadm.go:626] restartCluster start
	I0601 12:07:51.497782   29448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:07:51.504873   29448 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:51.504936   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:51.581506   29448 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601120641-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:07:51.581714   29448 kubeconfig.go:127] "default-k8s-different-port-20220601120641-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:07:51.582051   29448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:07:51.583188   29448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:07:51.591318   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:51.591366   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:51.600659   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:51.802801   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:51.803000   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:51.813919   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.002131   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.002333   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.013050   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.202762   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.203003   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.214293   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.400742   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.401039   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.413371   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.602798   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.603014   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.614030   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.802753   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.802902   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.813453   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.002110   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.002210   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.012890   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.200734   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.200808   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.209534   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.402797   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.402935   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.413942   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.602772   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.602954   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.614236   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.802625   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.802807   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.813315   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.000805   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.000963   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.011753   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.201071   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.201206   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.210732   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.401125   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.401238   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.411188   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.601290   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.601393   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.611951   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.611961   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.612012   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.620879   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
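
The checks above are a fixed-interval poll: run pgrep roughly every 200ms until the apiserver process shows up or the budget runs out, at which point the restart path gives up and reconfigures. A sketch of that wait loop (names and interval are illustrative):

    package sketch

    import (
        "errors"
        "os/exec"
        "time"
    )

    // waitForAPIServerProc polls pgrep until the process appears or the
    // deadline passes, like the repeated checks above.
    func waitForAPIServerProc(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found
            }
            time.Sleep(200 * time.Millisecond)
        }
        return errors.New("timed out waiting for the condition")
    }
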
	I0601 12:07:54.620892   29448 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 12:07:54.620904   29448 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:07:54.620958   29448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:07:54.654891   29448 docker.go:442] Stopping containers: [7328817f3bb4 d3f44f8f8e39 134c635592c8 46d8169c54fd a771108a72ba a3f49451d3a0 607c9ad659d0 8a911f22f085 e379e0b74a15 d25d7a042066 b1e1d206888c 93f762382a29 715955d40c64 a75eb9d31e2c b1116ac2ed18 30914a4918f1]
	I0601 12:07:54.654963   29448 ssh_runner.go:195] Run: docker stop 7328817f3bb4 d3f44f8f8e39 134c635592c8 46d8169c54fd a771108a72ba a3f49451d3a0 607c9ad659d0 8a911f22f085 e379e0b74a15 d25d7a042066 b1e1d206888c 93f762382a29 715955d40c64 a75eb9d31e2c b1116ac2ed18 30914a4918f1
	I0601 12:07:54.686689   29448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:07:54.699901   29448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:07:54.707795   29448 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 19:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  1 19:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun  1 19:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 19:06 /etc/kubernetes/scheduler.conf
	
	I0601 12:07:54.707845   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 12:07:54.716136   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 12:07:54.724080   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 12:07:54.731538   29448 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.731581   29448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:07:54.738546   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 12:07:54.745577   29448 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.745680   29448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
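
Each grep/rm pair above checks that a kubeconfig still points at https://control-plane.minikube.internal:8444 and deletes it otherwise, so the kubeconfig init phase below can regenerate it cleanly. A sketch of that check (function name is illustrative):

    package sketch

    import (
        "bytes"
        "os"
    )

    // dropStaleKubeconfig keeps path only if it mentions the expected
    // control-plane endpoint; otherwise it removes the file for regeneration.
    func dropStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if bytes.Contains(data, []byte(endpoint)) {
            return nil
        }
        return os.Remove(path)
    }
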
	I0601 12:07:54.752549   29448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:07:54.759719   29448 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:07:54.759733   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:54.804484   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:55.824635   29448 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020145427s)
	I0601 12:07:55.824694   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:55.951077   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:56.004952   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
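
Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, as the five commands above show. A sketch of that sequence (error handling simplified):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // runInitPhases replays the kubeadm init phases seen above, in order.
    func runInitPhases(kubeadmYAML string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append(append([]string{"init", "phase"}, p...), "--config", kubeadmYAML)
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }
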
	I0601 12:07:56.056152   29448 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:07:56.056230   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:56.577348   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.077226   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.577374   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.591888   29448 api_server.go:71] duration metric: took 1.535763125s to wait for apiserver process to appear ...
	I0601 12:07:57.591909   29448 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:07:57.591919   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:00.189999   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 12:08:00.190016   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 12:08:00.691046   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:00.696359   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:08:00.696371   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:08:01.190364   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:01.197216   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:08:01.197234   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:08:01.692213   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:01.699656   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 200:
	ok
	I0601 12:08:01.706073   29448 api_server.go:140] control plane version: v1.23.6
	I0601 12:08:01.706084   29448 api_server.go:130] duration metric: took 4.11422006s to wait for apiserver health ...
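
The healthz probes above hit the host-published port anonymously, which explains the progression in the log: first a 403 for system:anonymous (RBAC not bootstrapped yet), then 500s while [-] post-start hooks are still failing, and finally a 200. A sketch of one such probe (InsecureSkipVerify because no client certificate is presented):

    package sketch

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    // probeHealthz performs one anonymous check like those above and
    // returns the status code plus body (the [+]/[-] hook report on 500).
    func probeHealthz(url string) (int, string, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return 0, "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode, string(body), nil
    }
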
	I0601 12:08:01.706092   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:08:01.706097   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:08:01.706108   29448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:08:01.712337   29448 system_pods.go:59] 8 kube-system pods found
	I0601 12:08:01.712354   29448 system_pods.go:61] "coredns-64897985d-v5l86" [cebeba0e-d16c-4439-973e-3ddc9003cc40] Running
	I0601 12:08:01.712358   29448 system_pods.go:61] "etcd-default-k8s-different-port-20220601120641-16804" [c387f857-e5ff-45bd-b88c-09e06c1626b3] Running
	I0601 12:08:01.712366   29448 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [b256af8c-900c-49b6-b749-7d33ef7179e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 12:08:01.712376   29448 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [4dbe125a-f3ba-4200-85cb-744388b849ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 12:08:01.712381   29448 system_pods.go:61] "kube-proxy-7kqlg" [c5fea19e-e60f-4b90-b2e0-76618c2b78cc] Running
	I0601 12:08:01.712387   29448 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [cde39bae-3f41-4858-a543-60f81bff3509] Running
	I0601 12:08:01.712391   29448 system_pods.go:61] "metrics-server-b955d9d8-48tdv" [0c245d32-4061-4d02-b798-d0766b893fc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:08:01.712395   29448 system_pods.go:61] "storage-provisioner" [e70fe26d-b8cb-4d3d-8e22-76d353fcb4c8] Running
	I0601 12:08:01.712399   29448 system_pods.go:74] duration metric: took 6.286581ms to wait for pod list to return data ...
	I0601 12:08:01.712405   29448 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:08:01.715083   29448 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:08:01.735735   29448 node_conditions.go:123] node cpu capacity is 6
	I0601 12:08:01.735751   29448 node_conditions.go:105] duration metric: took 23.342838ms to run NodePressure ...
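
The eight-pod inventory and node-capacity figures above come from ordinary API reads. A client-go sketch of the pod list (the kubeconfig path is a placeholder, not taken from this run):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
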
	I0601 12:08:01.735781   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:08:01.859703   29448 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 12:08:01.863929   29448 kubeadm.go:777] kubelet initialised
	I0601 12:08:01.863940   29448 kubeadm.go:778] duration metric: took 4.22226ms waiting for restarted kubelet to initialise ...
	I0601 12:08:01.863948   29448 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:08:01.874140   29448 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-v5l86" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.878366   29448 pod_ready.go:92] pod "coredns-64897985d-v5l86" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:01.878375   29448 pod_ready.go:81] duration metric: took 4.22218ms waiting for pod "coredns-64897985d-v5l86" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.878381   29448 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.883193   29448 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:01.883207   29448 pod_ready.go:81] duration metric: took 4.820642ms waiting for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.883218   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:03.899247   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:06.396930   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:08.397693   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:10.899832   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:13.396683   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:15.397145   29448 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.397158   29448 pod_ready.go:81] duration metric: took 13.514096644s waiting for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.397165   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.401295   29448 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.401303   29448 pod_ready.go:81] duration metric: took 4.132737ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.401309   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7kqlg" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.405245   29448 pod_ready.go:92] pod "kube-proxy-7kqlg" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.405253   29448 pod_ready.go:81] duration metric: took 3.9394ms waiting for pod "kube-proxy-7kqlg" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.405259   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.409049   29448 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.409056   29448 pod_ready.go:81] duration metric: took 3.792078ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
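
Each wait above polls the pod and tests its Ready condition; metrics-server never reaches Ready in this run (its registry is overridden to fake.domain per the StartCluster config above), so the 4m0s budget below is spent in full. A sketch of the predicate behind the "Ready":"True"/"False" lines:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
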
	I0601 12:08:15.409061   29448 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:17.421198   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:19.921779   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:21.921963   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:24.419625   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:26.918715   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:28.920464   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:31.417510   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:33.421585   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:35.919309   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:37.919425   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:39.921636   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:42.419249   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:44.421280   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:46.919320   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:48.919646   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:51.419182   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:53.919801   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:55.921377   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:58.419040   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:00.420223   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:02.919098   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:05.422270   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:07.920676   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:09.921475   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:12.421183   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:14.423686   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:16.925551   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:18.926973   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:21.427812   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:23.428681   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:25.929071   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:28.428550   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:30.931471   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:33.429632   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:35.430443   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:37.431177   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:39.933430   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:42.430572   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:44.430931   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:46.434046   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:48.933873   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:51.431937   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:53.933106   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:56.432902   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:58.934520   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:01.433804   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:03.933118   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:05.934862   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:08.433670   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:10.933334   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:12.934779   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:15.433922   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:17.932737   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:19.934008   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:22.433285   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:24.933318   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:26.933678   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:29.431649   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:31.933315   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:34.433040   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:36.934014   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:39.432853   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:41.934681   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:44.432653   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:46.432803   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:48.932523   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:50.933280   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:53.433228   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:55.933454   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:57.933536   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:59.933863   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:02.432410   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:04.435185   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:06.435392   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:08.934115   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:10.934791   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:13.434833   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:15.934016   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:18.431815   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:20.434177   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:22.932827   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:24.934570   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:27.432774   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:29.433296   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:31.933145   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:33.934274   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:36.433856   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:38.934657   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:41.432415   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:43.433968   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:45.932441   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:48.432763   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:50.932538   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:53.433453   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:55.932367   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:58.431936   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:00.432217   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:02.934031   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:05.432003   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:07.433884   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:09.931151   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:11.936056   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:14.432496   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:15.425934   29448 pod_ready.go:81] duration metric: took 4m0.004313648s waiting for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" ...
	E0601 12:12:15.425951   29448 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 12:12:15.425962   29448 pod_ready.go:38] duration metric: took 4m13.549626809s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:12:15.426033   29448 kubeadm.go:630] restartCluster took 4m23.916033446s
	W0601 12:12:15.426108   29448 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 12:12:15.426126   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 12:12:53.838824   29448 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.412954319s)
	I0601 12:12:53.838885   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:12:53.848687   29448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:12:53.856102   29448 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 12:12:53.856144   29448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:12:53.863490   29448 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 12:12:53.863513   29448 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0601 12:12:54.346970   29448 out.go:204]   - Generating certificates and keys ...
	I0601 12:12:55.129022   29448 out.go:204]   - Booting up control plane ...
	I0601 12:13:02.175768   29448 out.go:204]   - Configuring RBAC rules ...
	I0601 12:13:02.551374   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:13:02.551386   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:13:02.551404   29448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:13:02.551496   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=default-k8s-different-port-20220601120641-16804 minikube.k8s.io/updated_at=2022_06_01T12_13_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:02.551495   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:02.680529   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:02.708050   29448 ops.go:34] apiserver oom_adj: -16
	I0601 12:13:03.312490   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:03.813877   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:04.312332   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:04.812303   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:05.312415   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:05.812342   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:06.312586   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:06.812404   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:07.313283   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:07.812436   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:08.313764   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:08.812377   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:09.312757   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:09.812425   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:10.312805   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:10.812408   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:11.312968   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:11.813718   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:12.313107   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:12.813057   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:13.312359   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:13.814145   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:14.313263   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:14.813634   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:15.312838   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:15.813633   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:15.865799   29448 kubeadm.go:1045] duration metric: took 13.314451092s to wait for elevateKubeSystemPrivileges.
	I0601 12:13:15.865815   29448 kubeadm.go:397] StartCluster complete in 5m24.394950441s
	I0601 12:13:15.865834   29448 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:13:15.865914   29448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:13:15.866468   29448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:13:16.381614   29448 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601120641-16804" rescaled to 1
	I0601 12:13:16.381652   29448 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:13:16.381667   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:13:16.381684   29448 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:13:16.402996   29448 out.go:177] * Verifying Kubernetes components...
	I0601 12:13:16.381820   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:13:16.403082   29448 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.403082   29448 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.403090   29448 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.403091   29448 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.436014   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 12:13:16.444972   29448 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.444984   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 12:13:16.444988   29448 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:13:16.444984   29448 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601120641-16804"
	W0601 12:13:16.445001   29448 addons.go:165] addon metrics-server should already be in state true
	I0601 12:13:16.444972   29448 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.445009   29448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601120641-16804"
	W0601 12:13:16.445028   29448 addons.go:165] addon dashboard should already be in state true
	I0601 12:13:16.445041   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.445047   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.445080   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.445392   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.445546   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.446199   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.446648   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.610210   29448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:13:16.566570   29448 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601120641-16804"
	W0601 12:13:16.610254   29448 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:13:16.589649   29448 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:13:16.610304   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.631634   29448 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:13:16.632003   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.673198   29448 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 12:13:16.652392   29448 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:13:16.652447   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:13:16.694686   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:13:16.715392   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:13:16.694724   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.715482   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.715479   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:13:16.715499   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:13:16.715581   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.768591   29448 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:13:16.768609   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:13:16.768738   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.815597   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.816722   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.819813   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.860299   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.918181   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:13:16.918199   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:13:16.918346   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:13:16.918353   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:13:16.923461   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:13:16.955845   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:13:16.955861   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:13:16.960053   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:13:16.960067   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:13:17.057030   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:13:17.065869   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:13:17.065888   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:13:17.067617   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:13:17.067631   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:13:17.092092   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:13:17.169735   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:13:17.169747   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:13:17.259925   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:13:17.259945   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:13:17.263456   29448 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 12:13:17.263653   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:17.285230   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:13:17.285253   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:13:17.348872   29448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601120641-16804" to be "Ready" ...
	I0601 12:13:17.358350   29448 node_ready.go:49] node "default-k8s-different-port-20220601120641-16804" has status "Ready":"True"
	I0601 12:13:17.358361   29448 node_ready.go:38] duration metric: took 9.446202ms waiting for node "default-k8s-different-port-20220601120641-16804" to be "Ready" ...
	I0601 12:13:17.358367   29448 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:13:17.365478   29448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-msx2w" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:17.366272   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:13:17.366299   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:13:17.461258   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:13:17.461280   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:13:17.555859   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:13:17.555881   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:13:17.588051   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:13:17.782740   29448 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:18.874759   29448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.286690533s)
	I0601 12:13:18.900484   29448 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 12:13:18.974946   29448 addons.go:417] enableAddons completed in 2.593269166s
	I0601 12:13:19.380036   29448 pod_ready.go:102] pod "coredns-64897985d-msx2w" in "kube-system" namespace has status "Ready":"False"
	I0601 12:13:20.881620   29448 pod_ready.go:92] pod "coredns-64897985d-msx2w" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.881634   29448 pod_ready.go:81] duration metric: took 3.516152058s waiting for pod "coredns-64897985d-msx2w" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.881640   29448 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.885547   29448 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.885556   29448 pod_ready.go:81] duration metric: took 3.888742ms waiting for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.885564   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.890003   29448 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.890014   29448 pod_ready.go:81] duration metric: took 4.436175ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.890020   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.894647   29448 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.894656   29448 pod_ready.go:81] duration metric: took 4.630025ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.894664   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvfsn" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.899454   29448 pod_ready.go:92] pod "kube-proxy-fvfsn" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.899464   29448 pod_ready.go:81] duration metric: took 4.795544ms waiting for pod "kube-proxy-fvfsn" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.899469   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:21.277424   29448 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:21.277435   29448 pod_ready.go:81] duration metric: took 377.962599ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:21.277441   29448 pod_ready.go:38] duration metric: took 3.919093022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:13:21.277459   29448 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:13:21.277508   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:13:21.294121   29448 api_server.go:71] duration metric: took 4.912483051s to wait for apiserver process to appear ...
	I0601 12:13:21.294136   29448 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:13:21.294144   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:13:21.299450   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 200:
	ok
	I0601 12:13:21.300543   29448 api_server.go:140] control plane version: v1.23.6
	I0601 12:13:21.300552   29448 api_server.go:130] duration metric: took 6.411199ms to wait for apiserver health ...
	I0601 12:13:21.300556   29448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:13:21.483407   29448 system_pods.go:59] 8 kube-system pods found
	I0601 12:13:21.483422   29448 system_pods.go:61] "coredns-64897985d-msx2w" [d1127fd7-0fe5-4d4b-9289-613de74b6bcf] Running
	I0601 12:13:21.483426   29448 system_pods.go:61] "etcd-default-k8s-different-port-20220601120641-16804" [d42fc13c-69a2-4dee-a6e8-24487a70b0ce] Running
	I0601 12:13:21.483430   29448 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [210d3b46-6934-4899-b557-a4c62d6a5f6b] Running
	I0601 12:13:21.483434   29448 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [114eba47-9745-47bb-907a-d05960c48a09] Running
	I0601 12:13:21.483450   29448 system_pods.go:61] "kube-proxy-fvfsn" [8668781c-7d87-48f7-9927-c9180d288cd2] Running
	I0601 12:13:21.483458   29448 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [8291ec9d-8928-48ec-ae6c-5daf01b958f3] Running
	I0601 12:13:21.483464   29448 system_pods.go:61] "metrics-server-b955d9d8-nq88j" [5678caec-c59f-4853-a92e-2cb2ce89b7ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:13:21.483471   29448 system_pods.go:61] "storage-provisioner" [bd72ae58-e8a6-4838-abd1-fd0b5e3a6922] Running
	I0601 12:13:21.483476   29448 system_pods.go:74] duration metric: took 182.917349ms to wait for pod list to return data ...
	I0601 12:13:21.483481   29448 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:13:21.678526   29448 default_sa.go:45] found service account: "default"
	I0601 12:13:21.678538   29448 default_sa.go:55] duration metric: took 195.05379ms for default service account to be created ...
	I0601 12:13:21.678544   29448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 12:13:21.881339   29448 system_pods.go:86] 8 kube-system pods found
	I0601 12:13:21.881353   29448 system_pods.go:89] "coredns-64897985d-msx2w" [d1127fd7-0fe5-4d4b-9289-613de74b6bcf] Running
	I0601 12:13:21.881358   29448 system_pods.go:89] "etcd-default-k8s-different-port-20220601120641-16804" [d42fc13c-69a2-4dee-a6e8-24487a70b0ce] Running
	I0601 12:13:21.881362   29448 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [210d3b46-6934-4899-b557-a4c62d6a5f6b] Running
	I0601 12:13:21.881368   29448 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [114eba47-9745-47bb-907a-d05960c48a09] Running
	I0601 12:13:21.881393   29448 system_pods.go:89] "kube-proxy-fvfsn" [8668781c-7d87-48f7-9927-c9180d288cd2] Running
	I0601 12:13:21.881398   29448 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [8291ec9d-8928-48ec-ae6c-5daf01b958f3] Running
	I0601 12:13:21.881405   29448 system_pods.go:89] "metrics-server-b955d9d8-nq88j" [5678caec-c59f-4853-a92e-2cb2ce89b7ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:13:21.881410   29448 system_pods.go:89] "storage-provisioner" [bd72ae58-e8a6-4838-abd1-fd0b5e3a6922] Running
	I0601 12:13:21.881415   29448 system_pods.go:126] duration metric: took 202.869164ms to wait for k8s-apps to be running ...
	I0601 12:13:21.881433   29448 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 12:13:21.881538   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:13:21.892314   29448 system_svc.go:56] duration metric: took 10.876936ms WaitForService to wait for kubelet.
	I0601 12:13:21.892326   29448 kubeadm.go:572] duration metric: took 5.510696085s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 12:13:21.892342   29448 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:13:22.078709   29448 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:13:22.078722   29448 node_conditions.go:123] node cpu capacity is 6
	I0601 12:13:22.078732   29448 node_conditions.go:105] duration metric: took 186.387199ms to run NodePressure ...
	I0601 12:13:22.078741   29448 start.go:213] waiting for startup goroutines ...
	I0601 12:13:22.114593   29448 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:13:22.136695   29448 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220601120641-16804" cluster and "default" namespace by default
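	
	The trace above reaches "Done!" only after the extra readiness wait gave up on "metrics-server-b955d9d8-48tdv" at the 4m0s limit and the cluster was reset (kubeadm reset, then a fresh kubeadm init). A minimal sketch of reproducing just that readiness check from outside minikube — assuming the profile's kubeconfig context is active and the addon keeps its usual k8s-app=metrics-server label (both assumptions, not confirmed by this log) — would be:
	
	  # Wait up to 4 minutes for the metrics-server pod to become Ready,
	  # mirroring the pod_ready timeout seen in the trace above.
	  kubectl --context default-k8s-different-port-20220601120641-16804 \
	    -n kube-system wait pod -l k8s-app=metrics-server \
	    --for=condition=Ready --timeout=4m
	
	  # On timeout, inspect why the container never reported Ready.
	  kubectl -n kube-system describe pod -l k8s-app=metrics-server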
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:07:48 UTC, end at Wed 2022-06-01 19:14:14 UTC. --
	Jun 01 19:12:21 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:21.829594044Z" level=info msg="ignoring event" container=ad462135e44548f0e01ab04ea9c7c6e79b0dca6139dc95f97281399a1e71251f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:31 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:31.895631481Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=9c3c7d4b3c3cffb50c69ad11a23f4fb40e316f9fba8c76a581ceee747fec6052
	Jun 01 19:12:31 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:31.921205720Z" level=info msg="ignoring event" container=9c3c7d4b3c3cffb50c69ad11a23f4fb40e316f9fba8c76a581ceee747fec6052 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:32 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:32.033766172Z" level=info msg="ignoring event" container=b8dd105e1b1e1493380d715967f78220bb233905aa22bcb00063f4ed3190031f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:32 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:32.207012137Z" level=info msg="ignoring event" container=b47b7c3c990ac37ab1c974069fa7c0a54631dfeec768b91a12760591905c4fdc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:42 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:42.293644666Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d8035d66f8c3e8511bbdb76253b2965fe76e0e3401c8ecda3ad06842ac006c6a
	Jun 01 19:12:42 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:42.323140472Z" level=info msg="ignoring event" container=d8035d66f8c3e8511bbdb76253b2965fe76e0e3401c8ecda3ad06842ac006c6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:42 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:42.438331519Z" level=info msg="ignoring event" container=a242c73eb08036ce5365f39788336cca723b66cb88c8b98804f4cc89803aeccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.530460784Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=56d339bec0c7859716007957685ce5f4da5d5481608f48916a5c5d85c97064a1
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.588547008Z" level=info msg="ignoring event" container=56d339bec0c7859716007957685ce5f4da5d5481608f48916a5c5d85c97064a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.687376191Z" level=info msg="ignoring event" container=b4f69b7ef98c89c2f1abb4d8550d04e660673ab0f8576c1c9182e545cb96b834 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.792503094Z" level=info msg="ignoring event" container=50056c971d28ac5f25188074831450a3b7daef25534632090b1c607d35e2632f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.890336901Z" level=info msg="ignoring event" container=ade63967b73a38bc798bb21b9699e74b796ea09a262ef225722416aa0b253b6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.998508161Z" level=info msg="ignoring event" container=cda462e6ac46846080fb472005d04b746b9fda633abc3bcde025e5e81986a08e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:13:18 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:18.603734442Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:18 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:18.603850775Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:18 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:18.605429270Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:19 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:19.600603460Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 19:13:25 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:25.588586601Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:13:25 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:25.824339442Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:13:29 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:29.190779059Z" level=info msg="ignoring event" container=4c9dde62933d7d3f2b1c265a3fc34ccb8fdf3d8984cfeb86046f2bf78773f16b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:13:29 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:29.537840971Z" level=info msg="ignoring event" container=5515926261d1caee0ed56717de9596057e6d0bd500dadd18ab9cae7fc5da31e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:13:30 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:30.711233573Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:30 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:30.711277334Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:30 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:30.713541612Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	5515926261d1c       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   a680a4a852272
	28902146038fb       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   49 seconds ago       Running             kubernetes-dashboard        0                   c8a49466c271d
	3083367c393b6       6e38f40d628db                                                                                    56 seconds ago       Running             storage-provisioner         0                   f9fbc3c6357ca
	0bb1fb7bc57bc       a4ca41631cc7a                                                                                    56 seconds ago       Running             coredns                     0                   ea0ce907391e0
	d6dfd2d806a99       4c03754524064                                                                                    58 seconds ago       Running             kube-proxy                  0                   a240131bf5b20
	424724c7b9917       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   8ebeaff1c6482
	52b11882ee23c       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   4e8df84449d5b
	1daae67186790       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   1444181cedc4d
	eb2d9ee592387       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   aa6ef812edbec
	
	* 
	* ==> coredns [0bb1fb7bc57b] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601120641-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601120641-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=default-k8s-different-port-20220601120641-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T12_13_02_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 19:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601120641-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 19:14:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:14:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20220601120641-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                dc4ced92-338f-4232-be6a-2c10371b9ac6
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-msx2w                                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     59s
	  kube-system                 etcd-default-k8s-different-port-20220601120641-16804                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601120641-16804              250m (4%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601120641-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-fvfsn                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601120641-16804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 metrics-server-b955d9d8-nq88j                                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         57s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-zpjjc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-rhjjp                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 58s                kube-proxy  
	  Normal  NodeHasSufficientMemory  79s (x3 over 79s)  kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x3 over 79s)  kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x3 over 79s)  kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 79s                kubelet     Starting kubelet.
	  Normal  Starting                 72s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                61s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeReady
	  Normal  Starting                 2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeReady
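	
	The NodeNotReady/NodeReady pair at 2s reflects the kubelet re-registering after its latest restart. To confirm the node's current conditions directly — a minimal sketch, assuming the same kubeconfig context as above — one could run:
	
	  kubectl get node default-k8s-different-port-20220601120641-16804 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'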
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [eb2d9ee59238] <==
	* {"level":"info","ts":"2022-06-01T19:12:56.791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-01T19:12:56.791Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T19:12:56.791Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-01T19:12:56.791Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:12:56.792Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T19:12:56.792Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:12:56.792Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T19:12:57.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:default-k8s-different-port-20220601120641-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:12:57.532Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T19:12:57.533Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:12:57.533Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:12:57.540Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  19:14:15 up  1:17,  0 users,  load average: 0.67, 0.61, 0.71
	Linux default-k8s-different-port-20220601120641-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1daae6718679] <==
	* I0601 19:13:00.296413       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 19:13:00.321796       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0601 19:13:00.324127       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0601 19:13:00.324174       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 19:13:00.580489       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 19:13:00.609847       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 19:13:00.727405       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 19:13:00.731365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 19:13:00.732038       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 19:13:00.734606       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 19:13:01.412976       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:13:02.391010       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:13:02.398585       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 19:13:02.407212       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:13:02.590004       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:13:15.547040       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:13:15.659329       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 19:13:16.232026       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:13:17.782035       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.90.120]
	W0601 19:13:18.604356       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:13:18.604489       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:13:18.604496       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 19:13:18.808881       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.159.28]
	I0601 19:13:18.877862       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.133.107]
	
	* 
	* ==> kube-controller-manager [52b11882ee23] <==
	* I0601 19:13:16.162285       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 19:13:17.603203       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 19:13:17.607392       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 19:13:17.671260       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 19:13:17.679084       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-nq88j"
	I0601 19:13:18.690736       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 19:13:18.698051       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 19:13:18.698318       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:13:18.704310       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.707461       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:13:18.715062       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.715887       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.715946       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:13:18.723180       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.723271       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.726732       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.727464       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:13:18.768304       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.768572       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:13:18.768690       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.768685       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:13:18.773515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-rhjjp"
	I0601 19:13:18.814039       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-zpjjc"
	E0601 19:14:11.731164       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 19:14:11.735698       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
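The resource-quota and garbage-collector warnings above are partial API-discovery failures: every group resolves except the aggregated metrics.k8s.io/v1beta1, which matches the 503 the apiserver logs for the v1beta1.metrics.k8s.io APIService. A minimal standalone client-go sketch of the same check (the default kubeconfig path is an assumption; the suite itself shells out to kubectl instead):

	package main

	import (
		"fmt"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// ServerGroupsAndResources returns partial results plus a typed error
		// when only some groups fail discovery.
		_, _, err = cs.Discovery().ServerGroupsAndResources()
		if gdf, ok := err.(*discovery.ErrGroupDiscoveryFailed); ok {
			for gv, gerr := range gdf.Groups {
				fmt.Printf("group %s unavailable: %v\n", gv, gerr) // e.g. metrics.k8s.io/v1beta1
			}
		} else if err != nil {
			panic(err)
		}
	}

ErrGroupDiscoveryFailed is exactly what both controllers hit here: discovery still succeeds for the built-in groups and only the aggregated group is reported unavailable.
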
	* 
	* ==> kube-proxy [d6dfd2d806a9] <==
	* I0601 19:13:16.207954       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:13:16.208028       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:13:16.208068       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:13:16.228419       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:13:16.228489       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:13:16.228499       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:13:16.228674       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:13:16.229808       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:13:16.230514       1 config.go:317] "Starting service config controller"
	I0601 19:13:16.230549       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:13:16.230851       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:13:16.230878       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:13:16.331226       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:13:16.331316       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [424724c7b991] <==
	* E0601 19:12:59.395936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 19:12:59.396020       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 19:12:59.396028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 19:12:59.396225       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:12:59.396260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:12:59.396322       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:12:59.396371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:12:59.396586       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 19:12:59.396756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 19:12:59.398008       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 19:12:59.398040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 19:13:00.240676       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 19:13:00.240834       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 19:13:00.244717       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:13:00.244778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:13:00.308534       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 19:13:00.308557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 19:13:00.319731       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 19:13:00.319877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 19:13:00.347294       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:13:00.347782       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:13:00.406035       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:13:00.406119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:13:01.094805       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 19:13:03.168784       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:07:48 UTC, end at Wed 2022-06-01 19:14:15 UTC. --
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.236967    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8668781c-7d87-48f7-9927-c9180d288cd2-xtables-lock\") pod \"kube-proxy-fvfsn\" (UID: \"8668781c-7d87-48f7-9927-c9180d288cd2\") " pod="kube-system/kube-proxy-fvfsn"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237039    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxt87\" (UniqueName: \"kubernetes.io/projected/8668781c-7d87-48f7-9927-c9180d288cd2-kube-api-access-fxt87\") pod \"kube-proxy-fvfsn\" (UID: \"8668781c-7d87-48f7-9927-c9180d288cd2\") " pod="kube-system/kube-proxy-fvfsn"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237062    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1127fd7-0fe5-4d4b-9289-613de74b6bcf-config-volume\") pod \"coredns-64897985d-msx2w\" (UID: \"d1127fd7-0fe5-4d4b-9289-613de74b6bcf\") " pod="kube-system/coredns-64897985d-msx2w"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237079    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df-tmp-volume\") pod \"dashboard-metrics-scraper-56974995fc-zpjjc\" (UID: \"59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-zpjjc"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237097    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8668781c-7d87-48f7-9927-c9180d288cd2-kube-proxy\") pod \"kube-proxy-fvfsn\" (UID: \"8668781c-7d87-48f7-9927-c9180d288cd2\") " pod="kube-system/kube-proxy-fvfsn"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237112    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5678caec-c59f-4853-a92e-2cb2ce89b7ab-tmp-dir\") pod \"metrics-server-b955d9d8-nq88j\" (UID: \"5678caec-c59f-4853-a92e-2cb2ce89b7ab\") " pod="kube-system/metrics-server-b955d9d8-nq88j"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237129    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrnn\" (UniqueName: \"kubernetes.io/projected/bd72ae58-e8a6-4838-abd1-fd0b5e3a6922-kube-api-access-5xrnn\") pod \"storage-provisioner\" (UID: \"bd72ae58-e8a6-4838-abd1-fd0b5e3a6922\") " pod="kube-system/storage-provisioner"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237172    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4338b89b-deb5-472b-b3a2-e8316af44b6a-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-rhjjp\" (UID: \"4338b89b-deb5-472b-b3a2-e8316af44b6a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rhjjp"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237187    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8668781c-7d87-48f7-9927-c9180d288cd2-lib-modules\") pod \"kube-proxy-fvfsn\" (UID: \"8668781c-7d87-48f7-9927-c9180d288cd2\") " pod="kube-system/kube-proxy-fvfsn"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237200    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd72ae58-e8a6-4838-abd1-fd0b5e3a6922-tmp\") pod \"storage-provisioner\" (UID: \"bd72ae58-e8a6-4838-abd1-fd0b5e3a6922\") " pod="kube-system/storage-provisioner"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237217    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdll\" (UniqueName: \"kubernetes.io/projected/4338b89b-deb5-472b-b3a2-e8316af44b6a-kube-api-access-8xdll\") pod \"kubernetes-dashboard-8469778f77-rhjjp\" (UID: \"4338b89b-deb5-472b-b3a2-e8316af44b6a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rhjjp"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237231    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxqv\" (UniqueName: \"kubernetes.io/projected/d1127fd7-0fe5-4d4b-9289-613de74b6bcf-kube-api-access-6xxqv\") pod \"coredns-64897985d-msx2w\" (UID: \"d1127fd7-0fe5-4d4b-9289-613de74b6bcf\") " pod="kube-system/coredns-64897985d-msx2w"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237249    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xp5\" (UniqueName: \"kubernetes.io/projected/5678caec-c59f-4853-a92e-2cb2ce89b7ab-kube-api-access-m7xp5\") pod \"metrics-server-b955d9d8-nq88j\" (UID: \"5678caec-c59f-4853-a92e-2cb2ce89b7ab\") " pod="kube-system/metrics-server-b955d9d8-nq88j"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237269    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75qd7\" (UniqueName: \"kubernetes.io/projected/59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df-kube-api-access-75qd7\") pod \"dashboard-metrics-scraper-56974995fc-zpjjc\" (UID: \"59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-zpjjc"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237277    6911 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:14.409328    6911 request.go:665] Waited for 1.161741415s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:14.453011    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:14.617506    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:14.812918    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.012981    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:15.314370    6911 scope.go:110] "RemoveContainer" containerID="5515926261d1caee0ed56717de9596057e6d0bd500dadd18ab9cae7fc5da31e5"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737459    6911 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737513    6911 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737622    6911 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m7xp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-nq88j_kube-system(5678caec-c59f-4853-a92e-2cb2ce89b7ab): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737650    6911 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-nq88j" podUID=5678caec-c59f-4853-a92e-2cb2ce89b7ab
	
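The pull failures above are the intended behavior of the metrics-server addon test, which re-registers the image under fake.domain (the --registries=MetricsServer=fake.domain flag is visible for a sibling profile in the Audit table below), so DNS resolution can never succeed and the container stays waiting. A hedged client-go sketch (default kubeconfig path assumed) that surfaces the same stuck-waiting state kubectl's field selector relies on:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				// A container stuck on an unpullable image reports a Waiting
				// state with reason ErrImagePull or ImagePullBackOff.
				if w := st.State.Waiting; w != nil &&
					(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
					fmt.Printf("%s/%s: %s\n", p.Name, st.Name, w.Message)
				}
			}
		}
	}
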
	* 
	* ==> kubernetes-dashboard [28902146038f] <==
	* 2022/06/01 19:13:25 Using namespace: kubernetes-dashboard
	2022/06/01 19:13:25 Using in-cluster config to connect to apiserver
	2022/06/01 19:13:25 Using secret token for csrf signing
	2022/06/01 19:13:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 19:13:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 19:13:25 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 19:13:25 Generating JWE encryption key
	2022/06/01 19:13:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 19:13:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 19:13:25 Initializing JWE encryption key from synchronized object
	2022/06/01 19:13:25 Creating in-cluster Sidecar client
	2022/06/01 19:13:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:13:25 Serving insecurely on HTTP port: 9090
	2022/06/01 19:14:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:13:25 Starting overwatch
	
	* 
	* ==> storage-provisioner [3083367c393b] <==
	* I0601 19:13:18.389490       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:13:18.404781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:13:18.404858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 19:13:18.473740       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 19:13:18.474125       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601120641-16804_903b70af-ba0f-4f4f-bbba-2aa90ba502a3!
	I0601 19:13:18.474729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59ef0bc1-1c20-46e6-b4d9-0741c7d0e59f", APIVersion:"v1", ResourceVersion:"511", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220601120641-16804_903b70af-ba0f-4f4f-bbba-2aa90ba502a3 became leader
	I0601 19:13:18.574596       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601120641-16804_903b70af-ba0f-4f4f-bbba-2aa90ba502a3!
	
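The storage-provisioner lines are stock client-go leader election: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the provisioner controller. The provisioner here locks an Endpoints object; a minimal sketch of the same pattern using the newer Lease lock (the identity, timings, and reuse of the lock name are illustrative, not the provisioner's actual configuration):

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// The real provisioner starts its controller loop here.
				},
				OnStoppedLeading: func() { os.Exit(1) },
			},
		})
	}
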

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-nq88j
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 describe pod metrics-server-b955d9d8-nq88j
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601120641-16804 describe pod metrics-server-b955d9d8-nq88j: exit status 1 (270.430901ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-nq88j" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601120641-16804 describe pod metrics-server-b955d9d8-nq88j: exit status 1
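The exit status 1 is a benign race rather than an additional failure: the metrics-server pod flagged as non-running moments earlier was already gone by the time describe ran. A post-mortem step that must tolerate this would treat NotFound as a clean outcome; a minimal client-go sketch (pod and namespace names taken from the log above, default kubeconfig path assumed):

	package main

	import (
		"context"
		"fmt"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-b955d9d8-nq88j", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The pod vanished between the list and the describe: nothing to report.
			fmt.Println("pod already gone")
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("phase:", pod.Status.Phase)
	}
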
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220601120641-16804
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220601120641-16804:

-- stdout --
	[
	    {
	        "Id": "4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362",
	        "Created": "2022-06-01T19:06:48.165680259Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:07:47.977315615Z",
	            "FinishedAt": "2022-06-01T19:07:45.986695989Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/hostname",
	        "HostsPath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/hosts",
	        "LogPath": "/var/lib/docker/containers/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362/4bda9ca272bbbdd9dec043dec560cebe0bf845d8c6cf657de9440077f12c6362-json.log",
	        "Name": "/default-k8s-different-port-20220601120641-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220601120641-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220601120641-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/docker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da065f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7b295dfc67afedd39845b9179bc3786b718d6567ab92bcfd7c61410315d8780/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220601120641-16804",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220601120641-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220601120641-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220601120641-16804",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220601120641-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cedfbd388f3387f90017da6086733698d8a2f3c09529b6401e6d01e3bb16ba75",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61977"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61978"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61979"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61980"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cedfbd388f33",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220601120641-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4bda9ca272bb",
	                        "default-k8s-different-port-20220601120641-16804"
	                    ],
	                    "NetworkID": "df289b10364773815e73fc407f32919c59e23733b1e76528cfb0d723d90782ba",
	                    "EndpointID": "8a163356dbb1c8f00a73a8e242f6a0f0f4c2e4a2c0e8539a4ff438ce129b077d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
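The inspect document above carries the two facts this post-mortem needs: the container is still running, and every guest port, including the profile's non-default apiserver port 8444, is published on a 127.0.0.1 ephemeral port (61981 here). A small Go sketch that extracts just those fields; the struct mirrors only the JSON keys shown above, not the full Docker Engine API type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect is a minimal subset of the `docker inspect` output shown above.
	type inspect struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"default-k8s-different-port-20220601120641-16804").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			fmt.Println("status:", c.State.Status)
			for port, binds := range c.NetworkSettings.Ports {
				for _, b := range binds {
					fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
				}
			}
		}
	}

Run against the profile name, it prints the same status and host/port pairs as the Ports map above.
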
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220601120641-16804 logs -n 25

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220601120641-16804 logs -n 25: (2.751149153s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | no-preload-20220601115057-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:58 PDT |
	|         | no-preload-20220601115057-16804                   |                                                 |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:58 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |                |                     |                     |
	|         | --driver=docker                                   |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |                |                     |                     |
	| stop    | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 11:59 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:01 PDT | 01 Jun 22 12:02 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| start   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:59 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |                |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |                |                     |                     |
	|         | --driver=docker                                   |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| ssh     | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |                |                     |                     |
	| pause   | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:05 PDT | 01 Jun 22 12:05 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| unpause | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| logs    | embed-certs-20220601115855-16804                  | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	| delete  | -p                                                | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                  |                                                 |         |                |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220601120640-16804      | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | disable-driver-mounts-20220601120640-16804        |                                                 |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |                |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804              | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:11 PDT | 01 Jun 22 12:11 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                      |                                                 |         |                |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |                |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804   |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804   | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                        |                                                 |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 12:07:46
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 12:07:46.714016   29448 out.go:296] Setting OutFile to fd 1 ...
	I0601 12:07:46.714188   29448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:07:46.714193   29448 out.go:309] Setting ErrFile to fd 2...
	I0601 12:07:46.714197   29448 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:07:46.714298   29448 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 12:07:46.714566   29448 out.go:303] Setting JSON to false
	I0601 12:07:46.729641   29448 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9436,"bootTime":1654101030,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 12:07:46.729740   29448 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 12:07:46.752140   29448 out.go:177] * [default-k8s-different-port-20220601120641-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 12:07:46.795651   29448 notify.go:193] Checking for updates...
	I0601 12:07:46.817606   29448 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 12:07:46.839623   29448 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:07:46.860438   29448 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 12:07:46.881832   29448 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 12:07:46.903780   29448 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 12:07:46.926112   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:07:46.926797   29448 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 12:07:47.000143   29448 docker.go:137] docker version: linux-20.10.14
	I0601 12:07:47.000297   29448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:07:47.131409   29448 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:07:47.070748627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:07:47.153289   29448 out.go:177] * Using the docker driver based on existing profile
	I0601 12:07:47.173957   29448 start.go:284] selected driver: docker
	I0601 12:07:47.173973   29448 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:47.174080   29448 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:07:47.176304   29448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:07:47.306114   29448 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:07:47.248401536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:07:47.306271   29448 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0601 12:07:47.306288   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:07:47.306295   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:07:47.306302   29448 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:47.350023   29448 out.go:177] * Starting control plane node default-k8s-different-port-20220601120641-16804 in cluster default-k8s-different-port-20220601120641-16804
	I0601 12:07:47.372290   29448 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:07:47.393752   29448 out.go:177] * Pulling base image ...
	I0601 12:07:47.437193   29448 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:07:47.437222   29448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:07:47.437287   29448 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:07:47.437322   29448 cache.go:57] Caching tarball of preloaded images
	I0601 12:07:47.437521   29448 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:07:47.437544   29448 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
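	A minimal shell sketch for checking the same cache entry by hand (assumptions: MINIKUBE_HOME points at the .minikube directory used in the paths above, and the sibling .checksum file holds the digest minikube compares against):
	
	    PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4"
	    # both files must exist for the preload lookup above to succeed
	    ls -l "$PRELOAD" "$PRELOAD.checksum"
	    # print the local digest for comparison (md5 on macOS, md5sum elsewhere)
	    md5 -q "$PRELOAD" 2>/dev/null || md5sum "$PRELOAD"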
	I0601 12:07:47.438529   29448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/config.json ...
	I0601 12:07:47.502152   29448 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:07:47.502172   29448 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:07:47.502180   29448 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:07:47.502221   29448 start.go:352] acquiring machines lock for default-k8s-different-port-20220601120641-16804: {Name:mk5000a48e15938a8ff193f7b1e0ef0205ca69c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:07:47.502307   29448 start.go:356] acquired machines lock for "default-k8s-different-port-20220601120641-16804" in 54.718µs
	I0601 12:07:47.502327   29448 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:07:47.502337   29448 fix.go:55] fixHost starting: 
	I0601 12:07:47.502581   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:07:47.570243   29448 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220601120641-16804: state=Stopped err=<nil>
	W0601 12:07:47.570270   29448 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:07:47.592778   29448 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220601120641-16804" ...
	I0601 12:07:47.614873   29448 cli_runner.go:164] Run: docker start default-k8s-different-port-20220601120641-16804
	I0601 12:07:47.973167   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:07:48.048677   29448 kic.go:416] container "default-k8s-different-port-20220601120641-16804" state is running.
	I0601 12:07:48.049618   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.132914   29448 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/config.json ...
	I0601 12:07:48.133339   29448 machine.go:88] provisioning docker machine ...
	I0601 12:07:48.133364   29448 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220601120641-16804"
	I0601 12:07:48.133419   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.212122   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:48.212345   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:48.212357   29448 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220601120641-16804 && echo "default-k8s-different-port-20220601120641-16804" | sudo tee /etc/hostname
	I0601 12:07:48.344170   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220601120641-16804
	
	I0601 12:07:48.344259   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.422969   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:48.423135   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:48.423162   29448 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220601120641-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220601120641-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220601120641-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:07:48.544579   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:07:48.544600   29448 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:07:48.544627   29448 ubuntu.go:177] setting up certificates
	I0601 12:07:48.544647   29448 provision.go:83] configureAuth start
	I0601 12:07:48.544718   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.622717   29448 provision.go:138] copyHostCerts
	I0601 12:07:48.622832   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:07:48.622842   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:07:48.622937   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:07:48.623147   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:07:48.623156   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:07:48.623223   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:07:48.623375   29448 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:07:48.623383   29448 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:07:48.623455   29448 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:07:48.623608   29448 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220601120641-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220601120641-16804]
	I0601 12:07:48.807397   29448 provision.go:172] copyRemoteCerts
	I0601 12:07:48.807465   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:07:48.807513   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:48.880166   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:48.968528   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:07:48.985675   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0601 12:07:49.003100   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 12:07:49.020622   29448 provision.go:86] duration metric: configureAuth took 475.965498ms
	I0601 12:07:49.020634   29448 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:07:49.020838   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:07:49.020914   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.093673   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.093829   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.093841   29448 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:07:49.210457   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:07:49.210469   29448 ubuntu.go:71] root file system type: overlay
	I0601 12:07:49.210594   29448 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:07:49.210662   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.283147   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.283317   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.283384   29448 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:07:49.409302   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:07:49.409387   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.484156   29448 main.go:134] libmachine: Using SSH client type: native
	I0601 12:07:49.484332   29448 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 61977 <nil> <nil>}
	I0601 12:07:49.484346   29448 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:07:49.604444   29448 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:07:49.604461   29448 machine.go:91] provisioned docker machine in 1.471129128s
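	To confirm that the unit rendered above is what the node ends up running, the same systemctl cat invocation that appears later in this log can be issued through minikube ssh (a sketch; the profile name is the one from this run):
	
	    out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220601120641-16804 \
	        "sudo systemctl cat docker.service"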
	I0601 12:07:49.604471   29448 start.go:306] post-start starting for "default-k8s-different-port-20220601120641-16804" (driver="docker")
	I0601 12:07:49.604477   29448 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:07:49.604532   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:07:49.604575   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.678684   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:49.764315   29448 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:07:49.767903   29448 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:07:49.767938   29448 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:07:49.767950   29448 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:07:49.767956   29448 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:07:49.767967   29448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:07:49.768069   29448 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:07:49.768203   29448 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:07:49.768341   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:07:49.775308   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:07:49.792544   29448 start.go:309] post-start completed in 188.064447ms
	I0601 12:07:49.792632   29448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:07:49.792692   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:49.865476   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:49.948688   29448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:07:49.953166   29448 fix.go:57] fixHost completed within 2.450856104s
	I0601 12:07:49.953184   29448 start.go:81] releasing machines lock for "default-k8s-different-port-20220601120641-16804", held for 2.450894668s
	I0601 12:07:49.953267   29448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.025599   29448 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:07:50.025607   29448 ssh_runner.go:195] Run: systemctl --version
	I0601 12:07:50.025662   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.025679   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.104052   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:50.107382   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:07:50.327885   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:07:50.339355   29448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:07:50.349315   29448 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:07:50.349405   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:07:50.358883   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:07:50.372373   29448 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:07:50.437808   29448 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:07:50.507439   29448 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:07:50.517896   29448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:07:50.595253   29448 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:07:50.605452   29448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:07:50.643710   29448 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:07:50.725451   29448 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:07:50.725679   29448 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220601120641-16804 dig +short host.docker.internal
	I0601 12:07:50.872144   29448 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:07:50.872230   29448 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:07:50.877033   29448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:07:50.888570   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:50.963632   29448 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:07:50.963715   29448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:07:50.995814   29448 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:07:50.995829   29448 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:07:50.995925   29448 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:07:51.027279   29448 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0601 12:07:51.027298   29448 cache_images.go:84] Images are preloaded, skipping loading
	I0601 12:07:51.027382   29448 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:07:51.102384   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:07:51.102395   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:07:51.102413   29448 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 12:07:51.102444   29448 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8444 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220601120641-16804 NodeName:default-k8s-different-port-20220601120641-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:07:51.102543   29448 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "default-k8s-different-port-20220601120641-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
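	The point of this profile is the non-default API server port, which appears twice in the rendered config above (bindPort and controlPlaneEndpoint). A hedged way to double-check the copy minikube uploads a few entries below (the scp writes /var/tmp/minikube/kubeadm.yaml.new; minikube may later drop the .new suffix):
	
	    out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220601120641-16804 \
	        "sudo grep -nE 'bindPort|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml.new"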
	
	I0601 12:07:51.102663   29448 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=default-k8s-different-port-20220601120641-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0601 12:07:51.102752   29448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:07:51.110604   29448 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:07:51.110649   29448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:07:51.117796   29448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0601 12:07:51.130564   29448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:07:51.142900   29448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0601 12:07:51.156672   29448 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:07:51.160712   29448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
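
The one-liner above is a small but careful edit: grep -v drops any stale control-plane.minikube.internal entry, echo appends the fresh IP mapping, and writing to /tmp/h.$$ before a sudo cp avoids truncating /etc/hosts mid-write. A sketch of composing that command string in Go (function name is illustrative; the command itself is taken verbatim from the log):

    package main

    import "fmt"

    // hostsUpdateCmd builds the shell pipeline from the log: strip the old
    // host entry, append the new one, then copy the result into place.
    func hostsUpdateCmd(ip, host string) string {
    	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("192.168.58.2", "control-plane.minikube.internal"))
    }
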
	I0601 12:07:51.170404   29448 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804 for IP: 192.168.58.2
	I0601 12:07:51.170523   29448 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:07:51.170574   29448 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:07:51.170655   29448 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/client.key
	I0601 12:07:51.170735   29448 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.key.cee25041
	I0601 12:07:51.170798   29448 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.key
	I0601 12:07:51.170999   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:07:51.171039   29448 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:07:51.171051   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:07:51.171085   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:07:51.171121   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:07:51.171151   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:07:51.171217   29448 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:07:51.171773   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:07:51.189027   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:07:51.206439   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:07:51.223833   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601120641-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 12:07:51.241255   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:07:51.258545   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:07:51.275931   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:07:51.293213   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:07:51.310440   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:07:51.327345   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:07:51.344962   29448 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:07:51.362940   29448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:07:51.376186   29448 ssh_runner.go:195] Run: openssl version
	I0601 12:07:51.381980   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:07:51.389866   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.393905   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.393948   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:07:51.400411   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:07:51.408002   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:07:51.415937   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.420272   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.420316   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:07:51.426141   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 12:07:51.433640   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:07:51.442012   29448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.446045   29448 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.446083   29448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:07:51.451363   29448 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
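
The openssl x509 -hash / ln -fs pairs above exist because OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filenames such as b5213941.0. A minimal sketch of the same install step, shelling out to the identical commands (not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCACert links a PEM file into /etc/ssl/certs under its OpenSSL
    // subject-hash name, mirroring the openssl/ln pair in the log above.
    func installCACert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// ln -fs equivalent: replace any existing link before re-creating it.
    	os.Remove(link)
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
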
	I0601 12:07:51.459039   29448 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220601120641-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220601120641-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:07:51.459130   29448 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:07:51.489623   29448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:07:51.497725   29448 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:07:51.497738   29448 kubeadm.go:626] restartCluster start
	I0601 12:07:51.497782   29448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:07:51.504873   29448 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:51.504936   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:07:51.581506   29448 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220601120641-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:07:51.581714   29448 kubeconfig.go:127] "default-k8s-different-port-20220601120641-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:07:51.582051   29448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:07:51.583188   29448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:07:51.591318   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:51.591366   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:51.600659   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:51.802801   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:51.803000   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:51.813919   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.002131   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.002333   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.013050   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.202762   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.203003   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.214293   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.400742   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.401039   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.413371   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.602798   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.603014   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.614030   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:52.802753   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:52.802902   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:52.813453   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.002110   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.002210   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.012890   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.200734   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.200808   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.209534   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.402797   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.402935   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.413942   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.602772   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.602954   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.614236   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:53.802625   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:53.802807   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:53.813315   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.000805   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.000963   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.011753   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.201071   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.201206   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.210732   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.401125   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.401238   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.411188   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.601290   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.601393   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.611951   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.611961   29448 api_server.go:165] Checking apiserver status ...
	I0601 12:07:54.612012   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:07:54.620879   29448 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.620892   29448 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
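
The block of near-identical entries above is a fixed-interval poll: roughly every 200 ms (per the timestamps) the pgrep probe is re-run, and once the bound is reached minikube concludes the apiserver is not coming back and falls through to reconfiguration. The shape of that loop, as a sketch with illustrative interval and timeout values:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollApiserver retries a probe until it succeeds or the deadline passes,
    // matching the repeated "Checking apiserver status" entries above.
    func pollApiserver(probe func() error, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := probe(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for the condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	probe := func() error {
    		// Same check as the log: is a kube-apiserver process running?
    		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	}
    	fmt.Println(pollApiserver(probe, 200*time.Millisecond, 3*time.Second))
    }
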
	I0601 12:07:54.620904   29448 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:07:54.620958   29448 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:07:54.654891   29448 docker.go:442] Stopping containers: [7328817f3bb4 d3f44f8f8e39 134c635592c8 46d8169c54fd a771108a72ba a3f49451d3a0 607c9ad659d0 8a911f22f085 e379e0b74a15 d25d7a042066 b1e1d206888c 93f762382a29 715955d40c64 a75eb9d31e2c b1116ac2ed18 30914a4918f1]
	I0601 12:07:54.654963   29448 ssh_runner.go:195] Run: docker stop 7328817f3bb4 d3f44f8f8e39 134c635592c8 46d8169c54fd a771108a72ba a3f49451d3a0 607c9ad659d0 8a911f22f085 e379e0b74a15 d25d7a042066 b1e1d206888c 93f762382a29 715955d40c64 a75eb9d31e2c b1116ac2ed18 30914a4918f1
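
The docker ps / docker stop pair above finds the kube-system pod containers by the kubelet's k8s_<container>_<pod>_<namespace>_ naming convention and stops them all in one command. A sketch of that pair, using the exact filter and format from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopKubeSystem lists kube-system pod containers (running or not) and
    // stops them, mirroring the two docker commands above.
    func stopKubeSystem() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil // nothing to stop
    	}
    	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
    	if err := stopKubeSystem(); err != nil {
    		fmt.Println(err)
    	}
    }
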
	I0601 12:07:54.686689   29448 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:07:54.699901   29448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:07:54.707795   29448 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 19:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  1 19:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun  1 19:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  1 19:06 /etc/kubernetes/scheduler.conf
	
	I0601 12:07:54.707845   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0601 12:07:54.716136   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0601 12:07:54.724080   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0601 12:07:54.731538   29448 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.731581   29448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:07:54.738546   29448 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0601 12:07:54.745577   29448 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:07:54.745680   29448 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 12:07:54.752549   29448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:07:54.759719   29448 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:07:54.759733   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:54.804484   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:55.824635   29448 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020145427s)
	I0601 12:07:55.824694   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:55.951077   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:07:56.004952   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
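
Note that the restart path above does not run a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing state. A sketch of driving those phases in sequence, assuming the same paths as the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const binDir = "/var/lib/minikube/binaries/v1.23.6"
    	const cfg = "/var/tmp/minikube/kubeadm.yaml"
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		// Re-run each phase against the existing cluster state, as above.
    		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
    			return
    		}
    	}
    }
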
	I0601 12:07:56.056152   29448 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:07:56.056230   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:56.577348   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.077226   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.577374   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:07:57.591888   29448 api_server.go:71] duration metric: took 1.535763125s to wait for apiserver process to appear ...
	I0601 12:07:57.591909   29448 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:07:57.591919   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:00.189999   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 12:08:00.190016   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 12:08:00.691046   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:00.696359   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:08:00.696371   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:08:01.190364   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:01.197216   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:08:01.197234   29448 api_server.go:102] status: https://127.0.0.1:61981/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:08:01.692213   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:08:01.699656   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 200:
	ok
	I0601 12:08:01.706073   29448 api_server.go:140] control plane version: v1.23.6
	I0601 12:08:01.706084   29448 api_server.go:130] duration metric: took 4.11422006s to wait for apiserver health ...
	I0601 12:08:01.706092   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:08:01.706097   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:08:01.706108   29448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:08:01.712337   29448 system_pods.go:59] 8 kube-system pods found
	I0601 12:08:01.712354   29448 system_pods.go:61] "coredns-64897985d-v5l86" [cebeba0e-d16c-4439-973e-3ddc9003cc40] Running
	I0601 12:08:01.712358   29448 system_pods.go:61] "etcd-default-k8s-different-port-20220601120641-16804" [c387f857-e5ff-45bd-b88c-09e06c1626b3] Running
	I0601 12:08:01.712366   29448 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [b256af8c-900c-49b6-b749-7d33ef7179e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0601 12:08:01.712376   29448 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [4dbe125a-f3ba-4200-85cb-744388b849ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0601 12:08:01.712381   29448 system_pods.go:61] "kube-proxy-7kqlg" [c5fea19e-e60f-4b90-b2e0-76618c2b78cc] Running
	I0601 12:08:01.712387   29448 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [cde39bae-3f41-4858-a543-60f81bff3509] Running
	I0601 12:08:01.712391   29448 system_pods.go:61] "metrics-server-b955d9d8-48tdv" [0c245d32-4061-4d02-b798-d0766b893fc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:08:01.712395   29448 system_pods.go:61] "storage-provisioner" [e70fe26d-b8cb-4d3d-8e22-76d353fcb4c8] Running
	I0601 12:08:01.712399   29448 system_pods.go:74] duration metric: took 6.286581ms to wait for pod list to return data ...
	I0601 12:08:01.712405   29448 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:08:01.715083   29448 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:08:01.735735   29448 node_conditions.go:123] node cpu capacity is 6
	I0601 12:08:01.735751   29448 node_conditions.go:105] duration metric: took 23.342838ms to run NodePressure ...
	I0601 12:08:01.735781   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:08:01.859703   29448 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 12:08:01.863929   29448 kubeadm.go:777] kubelet initialised
	I0601 12:08:01.863940   29448 kubeadm.go:778] duration metric: took 4.22226ms waiting for restarted kubelet to initialise ...
	I0601 12:08:01.863948   29448 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
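
The pod_ready waits that follow poll each system-critical pod for a Ready condition, up to 4m0s apiece. A client-go sketch of that check (pod and namespace names are taken from the log; the polling interval is an assumption):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, the same
    // shape as the pod_ready.go waits in the log.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for %s/%s to be Ready", ns, name)
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(config)
    	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-b955d9d8-48tdv", 4*time.Minute))
    }
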
	I0601 12:08:01.874140   29448 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-v5l86" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.878366   29448 pod_ready.go:92] pod "coredns-64897985d-v5l86" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:01.878375   29448 pod_ready.go:81] duration metric: took 4.22218ms waiting for pod "coredns-64897985d-v5l86" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.878381   29448 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.883193   29448 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:01.883207   29448 pod_ready.go:81] duration metric: took 4.820642ms waiting for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:01.883218   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:03.899247   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:06.396930   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:08.397693   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:10.899832   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:13.396683   29448 pod_ready.go:102] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:15.397145   29448 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.397158   29448 pod_ready.go:81] duration metric: took 13.514096644s waiting for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.397165   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.401295   29448 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.401303   29448 pod_ready.go:81] duration metric: took 4.132737ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.401309   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7kqlg" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.405245   29448 pod_ready.go:92] pod "kube-proxy-7kqlg" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.405253   29448 pod_ready.go:81] duration metric: took 3.9394ms waiting for pod "kube-proxy-7kqlg" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.405259   29448 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.409049   29448 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:08:15.409056   29448 pod_ready.go:81] duration metric: took 3.792078ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:15.409061   29448 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" ...
	I0601 12:08:17.421198   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:19.921779   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:21.921963   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:24.419625   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:26.918715   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:28.920464   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:31.417510   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:33.421585   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:35.919309   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:37.919425   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:39.921636   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:42.419249   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:44.421280   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:46.919320   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:48.919646   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:51.419182   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:53.919801   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:55.921377   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:08:58.419040   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:00.420223   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:02.919098   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:05.422270   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:07.920676   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:09.921475   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:12.421183   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:14.423686   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:16.925551   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:18.926973   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:21.427812   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:23.428681   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:25.929071   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:28.428550   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:30.931471   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:33.429632   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:35.430443   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:37.431177   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:39.933430   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:42.430572   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:44.430931   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:46.434046   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:48.933873   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:51.431937   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:53.933106   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:56.432902   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:09:58.934520   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:01.433804   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:03.933118   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:05.934862   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:08.433670   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:10.933334   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:12.934779   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:15.433922   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:17.932737   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:19.934008   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:22.433285   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:24.933318   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:26.933678   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:29.431649   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:31.933315   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:34.433040   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:36.934014   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:39.432853   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:41.934681   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:44.432653   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:46.432803   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:48.932523   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:50.933280   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:53.433228   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:55.933454   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:57.933536   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:10:59.933863   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:02.432410   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:04.435185   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:06.435392   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:08.934115   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:10.934791   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:13.434833   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:15.934016   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:18.431815   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:20.434177   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:22.932827   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:24.934570   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:27.432774   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:29.433296   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:31.933145   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:33.934274   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:36.433856   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:38.934657   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:41.432415   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:43.433968   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:45.932441   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:48.432763   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:50.932538   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:53.433453   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:55.932367   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:11:58.431936   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:00.432217   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:02.934031   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:05.432003   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:07.433884   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:09.931151   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:11.936056   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:14.432496   29448 pod_ready.go:102] pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace has status "Ready":"False"
	I0601 12:12:15.425934   29448 pod_ready.go:81] duration metric: took 4m0.004313648s waiting for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" ...
	E0601 12:12:15.425951   29448 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-48tdv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0601 12:12:15.425962   29448 pod_ready.go:38] duration metric: took 4m13.549626809s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:12:15.426033   29448 kubeadm.go:630] restartCluster took 4m23.916033446s
	W0601 12:12:15.426108   29448 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0601 12:12:15.426126   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0601 12:12:53.838824   29448 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (38.412954319s)
	I0601 12:12:53.838885   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:12:53.848687   29448 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:12:53.856102   29448 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0601 12:12:53.856144   29448 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:12:53.863490   29448 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0601 12:12:53.863513   29448 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
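
After the reset, a full kubeadm init runs with --ignore-preflight-errors because, inside the docker "node", leftover directories and manifests, the kubelet port, swap, memory, and system-verification checks would otherwise fail preflight. A sketch of assembling that command line (helper name is illustrative; the flag values come from the log, abbreviated here):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // initCmd builds the kubeadm init invocation from the log, prefixing PATH
    // with the per-version binaries directory so the right kubeadm is found.
    func initCmd(version, cfg string, ignores []string) string {
    	return fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
    		version, cfg, strings.Join(ignores, ","))
    }

    func main() {
    	fmt.Println(initCmd("v1.23.6", "/var/tmp/minikube/kubeadm.yaml",
    		[]string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "Mem", "SystemVerification"}))
    }
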
	I0601 12:12:54.346970   29448 out.go:204]   - Generating certificates and keys ...
	I0601 12:12:55.129022   29448 out.go:204]   - Booting up control plane ...
	I0601 12:13:02.175768   29448 out.go:204]   - Configuring RBAC rules ...
	I0601 12:13:02.551374   29448 cni.go:95] Creating CNI manager for ""
	I0601 12:13:02.551386   29448 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:13:02.551404   29448 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:13:02.551496   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1 minikube.k8s.io/name=default-k8s-different-port-20220601120641-16804 minikube.k8s.io/updated_at=2022_06_01T12_13_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:02.551495   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:02.680529   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:02.708050   29448 ops.go:34] apiserver oom_adj: -16
	I0601 12:13:03.312490   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:03.813877   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:04.312332   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:04.812303   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:05.312415   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:05.812342   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:06.312586   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:06.812404   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:07.313283   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:07.812436   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:08.313764   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:08.812377   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:09.312757   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:09.812425   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:10.312805   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:10.812408   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:11.312968   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:11.813718   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:12.313107   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:12.813057   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:13.312359   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:13.814145   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:14.313263   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:14.813634   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:15.312838   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:15.813633   29448 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0601 12:13:15.865799   29448 kubeadm.go:1045] duration metric: took 13.314451092s to wait for elevateKubeSystemPrivileges.
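Editor's note: the run of identical `kubectl get sa default` calls from 12:13:02 to 12:13:15 is a fixed-interval poll. minikube retries roughly every 500ms until the `default` ServiceAccount exists, so that the `minikube-rbac` cluster-admin binding created just before it has a subject to bind; the summary line reports the total (13.3s here). A sketch of that loop, with hypothetical parameter names:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` every 500ms until it
    // succeeds or the timeout expires (paths are illustrative).
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %v", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.6/kubectl",
    		"/var/lib/minikube/kubeconfig", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }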
	I0601 12:13:15.865815   29448 kubeadm.go:397] StartCluster complete in 5m24.394950441s
	I0601 12:13:15.865834   29448 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:13:15.865914   29448 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:13:15.866468   29448 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:13:16.381614   29448 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220601120641-16804" rescaled to 1
	I0601 12:13:16.381652   29448 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:13:16.381667   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:13:16.381684   29448 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:13:16.402996   29448 out.go:177] * Verifying Kubernetes components...
	I0601 12:13:16.381820   29448 config.go:178] Loaded profile config "default-k8s-different-port-20220601120641-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:13:16.403082   29448 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.403082   29448 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.403090   29448 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.403091   29448 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.436014   29448 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0601 12:13:16.444972   29448 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.444984   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 12:13:16.444988   29448 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:13:16.444984   29448 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220601120641-16804"
	W0601 12:13:16.445001   29448 addons.go:165] addon metrics-server should already be in state true
	I0601 12:13:16.444972   29448 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:16.445009   29448 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220601120641-16804"
	W0601 12:13:16.445028   29448 addons.go:165] addon dashboard should already be in state true
	I0601 12:13:16.445041   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.445047   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.445080   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.445392   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.445546   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.446199   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.446648   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.610210   29448 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:13:16.566570   29448 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220601120641-16804"
	W0601 12:13:16.610254   29448 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:13:16.589649   29448 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:13:16.610304   29448 host.go:66] Checking if "default-k8s-different-port-20220601120641-16804" exists ...
	I0601 12:13:16.631634   29448 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:13:16.632003   29448 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220601120641-16804 --format={{.State.Status}}
	I0601 12:13:16.673198   29448 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 12:13:16.652392   29448 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:13:16.652447   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:13:16.694686   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:13:16.715392   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:13:16.694724   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.715482   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.715479   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:13:16.715499   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:13:16.715581   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.768591   29448 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:13:16.768609   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:13:16.768738   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:16.815597   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.816722   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.819813   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.860299   29448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61977 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/default-k8s-different-port-20220601120641-16804/id_rsa Username:docker}
	I0601 12:13:16.918181   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:13:16.918199   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:13:16.918346   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:13:16.918353   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:13:16.923461   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:13:16.955845   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:13:16.955861   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:13:16.960053   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:13:16.960067   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:13:17.057030   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:13:17.065869   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:13:17.065888   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:13:17.067617   29448 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:13:17.067631   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:13:17.092092   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:13:17.169735   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:13:17.169747   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:13:17.259925   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:13:17.259945   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:13:17.263456   29448 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0601 12:13:17.263653   29448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220601120641-16804
	I0601 12:13:17.285230   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:13:17.285253   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:13:17.348872   29448 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220601120641-16804" to be "Ready" ...
	I0601 12:13:17.358350   29448 node_ready.go:49] node "default-k8s-different-port-20220601120641-16804" has status "Ready":"True"
	I0601 12:13:17.358361   29448 node_ready.go:38] duration metric: took 9.446202ms waiting for node "default-k8s-different-port-20220601120641-16804" to be "Ready" ...
	I0601 12:13:17.358367   29448 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:13:17.365478   29448 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-msx2w" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:17.366272   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:13:17.366299   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:13:17.461258   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:13:17.461280   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:13:17.555859   29448 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:13:17.555881   29448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:13:17.588051   29448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:13:17.782740   29448 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220601120641-16804"
	I0601 12:13:18.874759   29448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.286690533s)
	I0601 12:13:18.900484   29448 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0601 12:13:18.974946   29448 addons.go:417] enableAddons completed in 2.593269166s
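Editor's note: every addon above follows the same two-step pattern: each manifest is copied into /etc/kubernetes/addons/ over SSH (`scp memory --> ...`), then the addon's files are applied together in one `sudo KUBECONFIG=... kubectl apply -f ... -f ...` call (one file for storageclass, four for metrics-server, ten for dashboard). A condensed Go sketch of that flow, assuming a local write in place of minikube's SSH runner:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // applyAddon writes each manifest under /etc/kubernetes/addons and applies
    // them in a single kubectl invocation, as in the log above.
    func applyAddon(kubectl string, manifests map[string][]byte) error {
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
    	for name, data := range manifests {
    		path := filepath.Join("/etc/kubernetes/addons", name)
    		if err := os.WriteFile(path, data, 0644); err != nil {
    			return err
    		}
    		args = append(args, "-f", path)
    	}
    	// sudo accepts VAR=value assignments before the command, as logged.
    	return exec.Command("sudo", args...).Run()
    }

    func main() {
    	err := applyAddon("/var/lib/minikube/binaries/v1.23.6/kubectl",
    		map[string][]byte{"storageclass.yaml": []byte("# manifest bytes here")})
    	if err != nil {
    		log.Fatal(err)
    	}
    }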
	I0601 12:13:19.380036   29448 pod_ready.go:102] pod "coredns-64897985d-msx2w" in "kube-system" namespace has status "Ready":"False"
	I0601 12:13:20.881620   29448 pod_ready.go:92] pod "coredns-64897985d-msx2w" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.881634   29448 pod_ready.go:81] duration metric: took 3.516152058s waiting for pod "coredns-64897985d-msx2w" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.881640   29448 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.885547   29448 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.885556   29448 pod_ready.go:81] duration metric: took 3.888742ms waiting for pod "etcd-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.885564   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.890003   29448 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.890014   29448 pod_ready.go:81] duration metric: took 4.436175ms waiting for pod "kube-apiserver-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.890020   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.894647   29448 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.894656   29448 pod_ready.go:81] duration metric: took 4.630025ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.894664   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvfsn" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.899454   29448 pod_ready.go:92] pod "kube-proxy-fvfsn" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:20.899464   29448 pod_ready.go:81] duration metric: took 4.795544ms waiting for pod "kube-proxy-fvfsn" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:20.899469   29448 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:21.277424   29448 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace has status "Ready":"True"
	I0601 12:13:21.277435   29448 pod_ready.go:81] duration metric: took 377.962599ms waiting for pod "kube-scheduler-default-k8s-different-port-20220601120641-16804" in "kube-system" namespace to be "Ready" ...
	I0601 12:13:21.277441   29448 pod_ready.go:38] duration metric: took 3.919093022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 12:13:21.277459   29448 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:13:21.277508   29448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:13:21.294121   29448 api_server.go:71] duration metric: took 4.912483051s to wait for apiserver process to appear ...
	I0601 12:13:21.294136   29448 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:13:21.294144   29448 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61981/healthz ...
	I0601 12:13:21.299450   29448 api_server.go:266] https://127.0.0.1:61981/healthz returned 200:
	ok
	I0601 12:13:21.300543   29448 api_server.go:140] control plane version: v1.23.6
	I0601 12:13:21.300552   29448 api_server.go:130] duration metric: took 6.411199ms to wait for apiserver health ...
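Editor's note: the health gate at 12:13:21 is a plain HTTPS GET against the forwarded apiserver port (61981 here): a 200 with body `ok` passes, after which the control-plane version (v1.23.6) is read. A sketch of the probe; certificate verification is skipped because the test talks to a self-signed endpoint on 127.0.0.1:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func checkHealthz(url string) error {
    	// The apiserver serves a self-signed cert on the forwarded port,
    	// so verification is skipped for this sketch.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil // body is "ok", as logged above
    }

    func main() {
    	fmt.Println(checkHealthz("https://127.0.0.1:61981/healthz"))
    }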
	I0601 12:13:21.300556   29448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:13:21.483407   29448 system_pods.go:59] 8 kube-system pods found
	I0601 12:13:21.483422   29448 system_pods.go:61] "coredns-64897985d-msx2w" [d1127fd7-0fe5-4d4b-9289-613de74b6bcf] Running
	I0601 12:13:21.483426   29448 system_pods.go:61] "etcd-default-k8s-different-port-20220601120641-16804" [d42fc13c-69a2-4dee-a6e8-24487a70b0ce] Running
	I0601 12:13:21.483430   29448 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [210d3b46-6934-4899-b557-a4c62d6a5f6b] Running
	I0601 12:13:21.483434   29448 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [114eba47-9745-47bb-907a-d05960c48a09] Running
	I0601 12:13:21.483450   29448 system_pods.go:61] "kube-proxy-fvfsn" [8668781c-7d87-48f7-9927-c9180d288cd2] Running
	I0601 12:13:21.483458   29448 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [8291ec9d-8928-48ec-ae6c-5daf01b958f3] Running
	I0601 12:13:21.483464   29448 system_pods.go:61] "metrics-server-b955d9d8-nq88j" [5678caec-c59f-4853-a92e-2cb2ce89b7ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:13:21.483471   29448 system_pods.go:61] "storage-provisioner" [bd72ae58-e8a6-4838-abd1-fd0b5e3a6922] Running
	I0601 12:13:21.483476   29448 system_pods.go:74] duration metric: took 182.917349ms to wait for pod list to return data ...
	I0601 12:13:21.483481   29448 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:13:21.678526   29448 default_sa.go:45] found service account: "default"
	I0601 12:13:21.678538   29448 default_sa.go:55] duration metric: took 195.05379ms for default service account to be created ...
	I0601 12:13:21.678544   29448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 12:13:21.881339   29448 system_pods.go:86] 8 kube-system pods found
	I0601 12:13:21.881353   29448 system_pods.go:89] "coredns-64897985d-msx2w" [d1127fd7-0fe5-4d4b-9289-613de74b6bcf] Running
	I0601 12:13:21.881358   29448 system_pods.go:89] "etcd-default-k8s-different-port-20220601120641-16804" [d42fc13c-69a2-4dee-a6e8-24487a70b0ce] Running
	I0601 12:13:21.881362   29448 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220601120641-16804" [210d3b46-6934-4899-b557-a4c62d6a5f6b] Running
	I0601 12:13:21.881368   29448 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220601120641-16804" [114eba47-9745-47bb-907a-d05960c48a09] Running
	I0601 12:13:21.881393   29448 system_pods.go:89] "kube-proxy-fvfsn" [8668781c-7d87-48f7-9927-c9180d288cd2] Running
	I0601 12:13:21.881398   29448 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220601120641-16804" [8291ec9d-8928-48ec-ae6c-5daf01b958f3] Running
	I0601 12:13:21.881405   29448 system_pods.go:89] "metrics-server-b955d9d8-nq88j" [5678caec-c59f-4853-a92e-2cb2ce89b7ab] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:13:21.881410   29448 system_pods.go:89] "storage-provisioner" [bd72ae58-e8a6-4838-abd1-fd0b5e3a6922] Running
	I0601 12:13:21.881415   29448 system_pods.go:126] duration metric: took 202.869164ms to wait for k8s-apps to be running ...
	I0601 12:13:21.881433   29448 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 12:13:21.881538   29448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 12:13:21.892314   29448 system_svc.go:56] duration metric: took 10.876936ms WaitForService to wait for kubelet.
	I0601 12:13:21.892326   29448 kubeadm.go:572] duration metric: took 5.510696085s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 12:13:21.892342   29448 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:13:22.078709   29448 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:13:22.078722   29448 node_conditions.go:123] node cpu capacity is 6
	I0601 12:13:22.078732   29448 node_conditions.go:105] duration metric: took 186.387199ms to run NodePressure ...
	I0601 12:13:22.078741   29448 start.go:213] waiting for startup goroutines ...
	I0601 12:13:22.114593   29448 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:13:22.136695   29448 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220601120641-16804" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:07:48 UTC, end at Wed 2022-06-01 19:14:18 UTC. --
	Jun 01 19:12:32 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:32.207012137Z" level=info msg="ignoring event" container=b47b7c3c990ac37ab1c974069fa7c0a54631dfeec768b91a12760591905c4fdc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:42 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:42.293644666Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=d8035d66f8c3e8511bbdb76253b2965fe76e0e3401c8ecda3ad06842ac006c6a
	Jun 01 19:12:42 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:42.323140472Z" level=info msg="ignoring event" container=d8035d66f8c3e8511bbdb76253b2965fe76e0e3401c8ecda3ad06842ac006c6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:42 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:42.438331519Z" level=info msg="ignoring event" container=a242c73eb08036ce5365f39788336cca723b66cb88c8b98804f4cc89803aeccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.530460784Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=56d339bec0c7859716007957685ce5f4da5d5481608f48916a5c5d85c97064a1
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.588547008Z" level=info msg="ignoring event" container=56d339bec0c7859716007957685ce5f4da5d5481608f48916a5c5d85c97064a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.687376191Z" level=info msg="ignoring event" container=b4f69b7ef98c89c2f1abb4d8550d04e660673ab0f8576c1c9182e545cb96b834 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.792503094Z" level=info msg="ignoring event" container=50056c971d28ac5f25188074831450a3b7daef25534632090b1c607d35e2632f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.890336901Z" level=info msg="ignoring event" container=ade63967b73a38bc798bb21b9699e74b796ea09a262ef225722416aa0b253b6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:12:52 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:12:52.998508161Z" level=info msg="ignoring event" container=cda462e6ac46846080fb472005d04b746b9fda633abc3bcde025e5e81986a08e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:13:18 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:18.603734442Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:18 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:18.603850775Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:18 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:18.605429270Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:19 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:19.600603460Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	Jun 01 19:13:25 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:25.588586601Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:13:25 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:25.824339442Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 01 19:13:29 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:29.190779059Z" level=info msg="ignoring event" container=4c9dde62933d7d3f2b1c265a3fc34ccb8fdf3d8984cfeb86046f2bf78773f16b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:13:29 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:29.537840971Z" level=info msg="ignoring event" container=5515926261d1caee0ed56717de9596057e6d0bd500dadd18ab9cae7fc5da31e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:13:30 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:30.711233573Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:30 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:30.711277334Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:13:30 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:13:30.713541612Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:14:15.488536279Z" level=info msg="ignoring event" container=2af8db02a1b2ccbb722a9c4ad86f57a0a8f60b99e6862186e8d5a74492cd4488 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:14:15.691369896Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:14:15.691577157Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 dockerd[131]: time="2022-06-01T19:14:15.736871477Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	2af8db02a1b2c       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   a680a4a852272
	28902146038fb       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   53 seconds ago       Running             kubernetes-dashboard        0                   c8a49466c271d
	3083367c393b6       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f9fbc3c6357ca
	0bb1fb7bc57bc       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   ea0ce907391e0
	d6dfd2d806a99       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   a240131bf5b20
	424724c7b9917       595f327f224a4                                                                                    About a minute ago   Running             kube-scheduler              2                   8ebeaff1c6482
	52b11882ee23c       df7b72818ad2e                                                                                    About a minute ago   Running             kube-controller-manager     2                   4e8df84449d5b
	1daae67186790       8fa62c12256df                                                                                    About a minute ago   Running             kube-apiserver              2                   1444181cedc4d
	eb2d9ee592387       25f8c7f3da61c                                                                                    About a minute ago   Running             etcd                        2                   aa6ef812edbec
	
	* 
	* ==> coredns [0bb1fb7bc57b] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
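Editor's note: the two Reloading entries are CoreDNS picking up the ConfigMap replaced at 12:13:16, where minikube's sed pipeline splices a `hosts` block ahead of the `forward` plugin so pods can resolve host.minikube.internal. Reconstructed from that sed expression (the full Corefile itself is not captured in this log), the injected fragment looks like:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf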
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220601120641-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220601120641-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=default-k8s-different-port-20220601120641-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T12_13_02_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 19:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220601120641-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 19:14:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 19:14:12 +0000   Wed, 01 Jun 2022 19:14:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    default-k8s-different-port-20220601120641-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                dc4ced92-338f-4232-be6a-2c10371b9ac6
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-msx2w                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-default-k8s-different-port-20220601120641-16804                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         78s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220601120641-16804             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220601120641-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-fvfsn                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220601120641-16804             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-b955d9d8-nq88j                                              100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-zpjjc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-rhjjp                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 62s                kube-proxy  
	  Normal  NodeHasSufficientMemory  84s (x3 over 84s)  kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x3 over 84s)  kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x3 over 84s)  kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 84s                kubelet     Starting kubelet.
	  Normal  Starting                 77s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                66s                kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet     Node default-k8s-different-port-20220601120641-16804 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [eb2d9ee59238] <==
	* {"level":"info","ts":"2022-06-01T19:12:56.791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-01T19:12:56.791Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T19:12:56.791Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-01T19:12:56.791Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:12:56.792Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T19:12:56.792Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:12:56.792Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T19:12:57.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:12:57.530Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:default-k8s-different-port-20220601120641-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:12:57.531Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:12:57.532Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T19:12:57.533Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:12:57.533Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:12:57.540Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  19:14:19 up  1:17,  0 users,  load average: 0.67, 0.61, 0.71
	Linux default-k8s-different-port-20220601120641-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1daae6718679] <==
	* I0601 19:13:00.580489       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 19:13:00.609847       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 19:13:00.727405       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0601 19:13:00.731365       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0601 19:13:00.732038       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 19:13:00.734606       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 19:13:01.412976       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:13:02.391010       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:13:02.398585       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0601 19:13:02.407212       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:13:02.590004       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:13:15.547040       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:13:15.659329       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 19:13:16.232026       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:13:17.782035       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.90.120]
	W0601 19:13:18.604356       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:13:18.604489       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:13:18.604496       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 19:13:18.808881       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.159.28]
	I0601 19:13:18.877862       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.133.107]
	W0601 19:14:18.561364       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:14:18.561469       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:14:18.561497       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
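Editor's note: these 503s are the expected downstream of the dockerd failures in the Docker section above. The metrics-server deployment references fake.domain/k8s.gcr.io/echoserver:1.4 (a deliberately unresolvable registry, per the "Using image" line at 12:13:16), so the image can never be pulled, the pod stays Pending, and the aggregated v1beta1.metrics.k8s.io APIService reports unavailable whenever the apiserver refreshes its OpenAPI spec. A quick probe of that state, sketched in Go around kubectl:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Typically shows Available=False (e.g. MissingEndpoints) while the
    	// backing metrics-server pod cannot pull its image.
    	out, err := exec.Command("kubectl", "get", "apiservice",
    		"v1beta1.metrics.k8s.io", "-o", "wide").CombinedOutput()
    	fmt.Println(string(out), err)
    }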
	
	* 
	* ==> kube-controller-manager [52b11882ee23] <==
	* I0601 19:13:16.162285       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 19:13:17.603203       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 19:13:17.607392       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 19:13:17.671260       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 19:13:17.679084       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-nq88j"
	I0601 19:13:18.690736       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 19:13:18.698051       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 19:13:18.698318       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:13:18.704310       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.707461       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:13:18.715062       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.715887       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.715946       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:13:18.723180       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.723271       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.726732       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.727464       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:13:18.768304       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0601 19:13:18.768572       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0601 19:13:18.768690       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0601 19:13:18.768685       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0601 19:13:18.773515       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-rhjjp"
	I0601 19:13:18.814039       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-zpjjc"
	E0601 19:14:11.731164       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0601 19:14:11.735698       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [d6dfd2d806a9] <==
	* I0601 19:13:16.207954       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:13:16.208028       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:13:16.208068       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:13:16.228419       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:13:16.228489       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:13:16.228499       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:13:16.228674       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:13:16.229808       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:13:16.230514       1 config.go:317] "Starting service config controller"
	I0601 19:13:16.230549       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:13:16.230851       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:13:16.230878       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:13:16.331226       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:13:16.331316       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [424724c7b991] <==
	* E0601 19:12:59.395936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 19:12:59.396020       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 19:12:59.396028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 19:12:59.396225       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:12:59.396260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:12:59.396322       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:12:59.396371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:12:59.396586       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 19:12:59.396756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 19:12:59.398008       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 19:12:59.398040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 19:13:00.240676       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 19:13:00.240834       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0601 19:13:00.244717       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:13:00.244778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:13:00.308534       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 19:13:00.308557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 19:13:00.319731       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 19:13:00.319877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 19:13:00.347294       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:13:00.347782       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:13:00.406035       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:13:00.406119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:13:01.094805       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 19:13:03.168784       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:07:48 UTC, end at Wed 2022-06-01 19:14:20 UTC. --
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237129    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrnn\" (UniqueName: \"kubernetes.io/projected/bd72ae58-e8a6-4838-abd1-fd0b5e3a6922-kube-api-access-5xrnn\") pod \"storage-provisioner\" (UID: \"bd72ae58-e8a6-4838-abd1-fd0b5e3a6922\") " pod="kube-system/storage-provisioner"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237172    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4338b89b-deb5-472b-b3a2-e8316af44b6a-tmp-volume\") pod \"kubernetes-dashboard-8469778f77-rhjjp\" (UID: \"4338b89b-deb5-472b-b3a2-e8316af44b6a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rhjjp"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237187    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8668781c-7d87-48f7-9927-c9180d288cd2-lib-modules\") pod \"kube-proxy-fvfsn\" (UID: \"8668781c-7d87-48f7-9927-c9180d288cd2\") " pod="kube-system/kube-proxy-fvfsn"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237200    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bd72ae58-e8a6-4838-abd1-fd0b5e3a6922-tmp\") pod \"storage-provisioner\" (UID: \"bd72ae58-e8a6-4838-abd1-fd0b5e3a6922\") " pod="kube-system/storage-provisioner"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237217    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdll\" (UniqueName: \"kubernetes.io/projected/4338b89b-deb5-472b-b3a2-e8316af44b6a-kube-api-access-8xdll\") pod \"kubernetes-dashboard-8469778f77-rhjjp\" (UID: \"4338b89b-deb5-472b-b3a2-e8316af44b6a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-rhjjp"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237231    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxqv\" (UniqueName: \"kubernetes.io/projected/d1127fd7-0fe5-4d4b-9289-613de74b6bcf-kube-api-access-6xxqv\") pod \"coredns-64897985d-msx2w\" (UID: \"d1127fd7-0fe5-4d4b-9289-613de74b6bcf\") " pod="kube-system/coredns-64897985d-msx2w"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237249    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7xp5\" (UniqueName: \"kubernetes.io/projected/5678caec-c59f-4853-a92e-2cb2ce89b7ab-kube-api-access-m7xp5\") pod \"metrics-server-b955d9d8-nq88j\" (UID: \"5678caec-c59f-4853-a92e-2cb2ce89b7ab\") " pod="kube-system/metrics-server-b955d9d8-nq88j"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237269    6911 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75qd7\" (UniqueName: \"kubernetes.io/projected/59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df-kube-api-access-75qd7\") pod \"dashboard-metrics-scraper-56974995fc-zpjjc\" (UID: \"59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-zpjjc"
	Jun 01 19:14:13 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:13.237277    6911 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:14.409328    6911 request.go:665] Waited for 1.161741415s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:14.453011    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:14.617506    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:14 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:14.812918    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.012981    6911 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220601120641-16804\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220601120641-16804"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:15.314370    6911 scope.go:110] "RemoveContainer" containerID="5515926261d1caee0ed56717de9596057e6d0bd500dadd18ab9cae7fc5da31e5"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737459    6911 remote_image.go:216] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737513    6911 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737622    6911 kuberuntime_manager.go:919] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m7xp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Prob
eHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{}
,TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-b955d9d8-nq88j_kube-system(5678caec-c59f-4853-a92e-2cb2ce89b7ab): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 01 19:14:15 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:15.737650    6911 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-b955d9d8-nq88j" podUID=5678caec-c59f-4853-a92e-2cb2ce89b7ab
	Jun 01 19:14:16 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:16.263841    6911 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-zpjjc through plugin: invalid network status for"
	Jun 01 19:14:16 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:16.268644    6911 scope.go:110] "RemoveContainer" containerID="5515926261d1caee0ed56717de9596057e6d0bd500dadd18ab9cae7fc5da31e5"
	Jun 01 19:14:16 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:16.268896    6911 scope.go:110] "RemoveContainer" containerID="2af8db02a1b2ccbb722a9c4ad86f57a0a8f60b99e6862186e8d5a74492cd4488"
	Jun 01 19:14:16 default-k8s-different-port-20220601120641-16804 kubelet[6911]: E0601 19:14:16.269065    6911 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-zpjjc_kubernetes-dashboard(59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-zpjjc" podUID=59aefb1a-c87a-44ae-a6ff-5b1e5c03c2df
	Jun 01 19:14:17 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:17.086345    6911 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Jun 01 19:14:17 default-k8s-different-port-20220601120641-16804 kubelet[6911]: I0601 19:14:17.275224    6911 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-zpjjc through plugin: invalid network status for"
	
	* 
	* ==> kubernetes-dashboard [28902146038f] <==
	* 2022/06/01 19:13:25 Using namespace: kubernetes-dashboard
	2022/06/01 19:13:25 Using in-cluster config to connect to apiserver
	2022/06/01 19:13:25 Using secret token for csrf signing
	2022/06/01 19:13:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/01 19:13:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/01 19:13:25 Successful initial request to the apiserver, version: v1.23.6
	2022/06/01 19:13:25 Generating JWE encryption key
	2022/06/01 19:13:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/01 19:13:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/01 19:13:25 Initializing JWE encryption key from synchronized object
	2022/06/01 19:13:25 Creating in-cluster Sidecar client
	2022/06/01 19:13:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:13:25 Serving insecurely on HTTP port: 9090
	2022/06/01 19:14:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/01 19:13:25 Starting overwatch
	
	* 
	* ==> storage-provisioner [3083367c393b] <==
	* I0601 19:13:18.389490       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:13:18.404781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:13:18.404858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 19:13:18.473740       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 19:13:18.474125       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601120641-16804_903b70af-ba0f-4f4f-bbba-2aa90ba502a3!
	I0601 19:13:18.474729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59ef0bc1-1c20-46e6-b4d9-0741c7d0e59f", APIVersion:"v1", ResourceVersion:"511", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220601120641-16804_903b70af-ba0f-4f4f-bbba-2aa90ba502a3 became leader
	I0601 19:13:18.574596       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220601120641-16804_903b70af-ba0f-4f4f-bbba-2aa90ba502a3!
	

-- /stdout --
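Reading the dump above as a whole: the kubelet is trying to pull the metrics-server image from fake.domain, an unresolvable registry (the --registries=MetricsServer=fake.domain flag in the addons enable command recorded in the Audit table later in this report shows this is deliberate), so the pod sits in ErrImagePull. With no backend behind the aggregated API, the kube-apiserver's repeated 503s for v1beta1.metrics.k8s.io and the dashboard's failing metric-client health checks follow directly; kubectl get apiservice v1beta1.metrics.k8s.io would likely show it as unavailable for the same reason. The controller-manager's "serviceaccount not found" FailedCreate warnings and the scheduler's "forbidden" list/watch errors both clear within seconds (SuccessfulCreate events and "Caches are synced" appear right after), which looks like the usual startup ordering race rather than part of this failure.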
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-nq88j
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 describe pod metrics-server-b955d9d8-nq88j
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220601120641-16804 describe pod metrics-server-b955d9d8-nq88j: exit status 1 (321.897116ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-nq88j" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220601120641-16804 describe pod metrics-server-b955d9d8-nq88j: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.63s)
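The post-mortem itself hit a small race: the pod listing at helpers_test.go:270 still saw metrics-server-b955d9d8-nq88j, but by the time the describe at helpers_test.go:275 ran, the pod was already gone (NotFound), presumably removed as the deployment was torn down. A hypothetical standalone sketch (not the harness's own code) of the same non-running-pod check, shelling out to kubectl exactly as the harness does, with the context name taken from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the harness runs: names of all pods, in any namespace,
	// whose phase is not Running.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-different-port-20220601120641-16804",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	// Anything listed here can disappear before a follow-up describe runs,
	// which is all the NotFound above amounts to.
	fmt.Printf("non-running pods: %v\n", strings.Fields(string(out)))
}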

TestStartStop/group/newest-cni/serial/Pause (50.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220601121425-16804 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
E0601 12:15:44.420714   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
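This cert_rotation error (another appears below at 12:16:14) comes from client-go's certificate-reload watcher, which is still tracking client.crt paths for profiles that earlier tests already deleted (enable-default-cni-20220601113004-16804 here, false-20220601113005-16804 below). It appears to be harness-side noise from the shared kubeconfig rather than anything to do with this test's failure.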

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804: exit status 2 (16.118242257s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: post-pause apiserver status = "Stopped"; want = "Paused"
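start_stop_delete_test.go expects a paused cluster at this point: after minikube pause, status should report the apiserver (and next the kubelet) as Paused. Getting Stopped back instead, with each status call taking ~16s, suggests the components actually went down around the pause rather than into the paused state; the docker inspect in the post-mortem below shows the node container itself exited and restarted at 19:15:17-19:15:18 UTC, right in this window.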
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804: exit status 2 (16.10669288s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220601121425-16804 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601121425-16804
helpers_test.go:235: (dbg) docker inspect newest-cni-20220601121425-16804:

-- stdout --
	[
	    {
	        "Id": "f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2",
	        "Created": "2022-06-01T19:14:32.20488065Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:15:18.964437684Z",
	            "FinishedAt": "2022-06-01T19:15:17.028178606Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/hosts",
	        "LogPath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2-json.log",
	        "Name": "/newest-cni-20220601121425-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220601121425-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220601121425-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220601121425-16804",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220601121425-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220601121425-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220601121425-16804",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220601121425-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e0ef8e7050eb2793689ea224c06b795a845eac59966855fef0c96c6bcdcb4c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63286"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63287"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63284"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63285"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4e0ef8e7050e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220601121425-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f73f4c4c0aad",
	                        "newest-cni-20220601121425-16804"
	                    ],
	                    "NetworkID": "634df21d479086fd4886ba001592f0257be8d96086027910ce34ad386e6313ab",
	                    "EndpointID": "b24e76f7ca9dd0219a83c92e8de89b7ae5f7d0ac3cc7a5b90c69dfab6121e9a1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
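Two details stand out in the inspect output: State shows "Running": true with "Paused": false, and the FinishedAt/StartedAt pair (19:15:17 -> 19:15:18 UTC) means the node container exited and was restarted about 45 seconds after creation, around the pause attempt; a container that pause had left frozen would instead report "Status": "paused" with "Paused": true. A hypothetical sketch for pulling just those fields out of the inspect JSON, equivalent to docker inspect --format '{{.State.Status}}' but keeping the decoded struct around:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerInfo models only the State fields this check needs; docker
// inspect emits a JSON array with one element per inspected container.
type containerInfo struct {
	State struct {
		Status     string `json:"Status"`
		Running    bool   `json:"Running"`
		Paused     bool   `json:"Paused"`
		StartedAt  string `json:"StartedAt"`
		FinishedAt string `json:"FinishedAt"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"newest-cni-20220601121425-16804").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var infos []containerInfo
	if err := json.Unmarshal(out, &infos); err != nil || len(infos) == 0 {
		fmt.Println("could not decode inspect output:", err)
		return
	}
	s := infos[0].State
	fmt.Printf("status=%s running=%v paused=%v started=%s finished=%s\n",
		s.Status, s.Running, s.Paused, s.StartedAt, s.FinishedAt)
}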
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220601121425-16804 logs -n 25
E0601 12:16:14.003818   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220601121425-16804 logs -n 25: (4.395853602s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| logs    | embed-certs-20220601115855-16804                           | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                           |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                           |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220601120640-16804      | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | disable-driver-mounts-20220601120640-16804                 |                                                 |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804                       | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:11 PDT | 01 Jun 22 12:11 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804            | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804            | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	| start   | -p newest-cni-20220601121425-16804 --memory=2200           | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |                |                     |                     |
	| start   | -p newest-cni-20220601121425-16804 --memory=2200           | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                 |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
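
Each row above is one invocation of the minikube binary under test. A minimal Go harness that drives the same binary and records wall-clock duration, in the spirit of those rows (binary path, profile name, and flags are read off the table; the helper itself is a sketch, not the actual test harness code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runMinikube shells out to the binary the table rows record, capturing
// combined output and duration. Hypothetical helper for illustration.
func runMinikube(args ...string) (string, time.Duration, error) {
	start := time.Now()
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	return string(out), time.Since(start), err
}

func main() {
	out, d, err := runMinikube("start", "-p", "newest-cni-20220601121425-16804",
		"--memory=2200", "--alsologtostderr", "--driver=docker",
		"--kubernetes-version=v1.23.6")
	fmt.Printf("took %v err=%v\n%s", d, err, out)
}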
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 12:15:17
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 12:15:17.696814   30017 out.go:296] Setting OutFile to fd 1 ...
	I0601 12:15:17.696973   30017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:15:17.696997   30017 out.go:309] Setting ErrFile to fd 2...
	I0601 12:15:17.697002   30017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:15:17.697117   30017 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 12:15:17.697435   30017 out.go:303] Setting JSON to false
	I0601 12:15:17.712247   30017 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9887,"bootTime":1654101030,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 12:15:17.712361   30017 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 12:15:17.736545   30017 out.go:177] * [newest-cni-20220601121425-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 12:15:17.757487   30017 notify.go:193] Checking for updates...
	I0601 12:15:17.779138   30017 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 12:15:17.801292   30017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:17.844142   30017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 12:15:17.865224   30017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 12:15:17.886436   30017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 12:15:17.908918   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:17.909562   30017 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 12:15:17.981700   30017 docker.go:137] docker version: linux-20.10.14
	I0601 12:15:17.981838   30017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:15:18.111344   30017 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:15:18.053192342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:15:18.133307   30017 out.go:177] * Using the docker driver based on existing profile
	I0601 12:15:18.154970   30017 start.go:284] selected driver: docker
	I0601 12:15:18.154995   30017 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:18.155139   30017 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:15:18.158589   30017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:15:18.288149   30017 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:15:18.231685823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:15:18.288406   30017 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 12:15:18.288423   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:18.288431   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:18.288445   30017 start_flags.go:306] config:
	{Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:18.310379   30017 out.go:177] * Starting control plane node newest-cni-20220601121425-16804 in cluster newest-cni-20220601121425-16804
	I0601 12:15:18.332359   30017 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:15:18.354367   30017 out.go:177] * Pulling base image ...
	I0601 12:15:18.397207   30017 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:15:18.397218   30017 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:15:18.397297   30017 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:15:18.397315   30017 cache.go:57] Caching tarball of preloaded images
	I0601 12:15:18.397498   30017 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:15:18.397520   30017 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
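	
The preload check above reduces to a stat on the expected tarball path under the cache directory: if the file is present, the download is skipped. A minimal sketch of that lookup (the path layout is read off the log lines; the helper itself is hypothetical, not minikube's actual preload.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location for a preloaded tarball, mirroring
// the layout seen in this log (hypothetical reconstruction).
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.23.6")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("no local preload, would download:", err)
		return
	}
	fmt.Println("found local preload, skipping download:", p)
}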
	I0601 12:15:18.398565   30017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/config.json ...
	I0601 12:15:18.463117   30017 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:15:18.463133   30017 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:15:18.463143   30017 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:15:18.463194   30017 start.go:352] acquiring machines lock for newest-cni-20220601121425-16804: {Name:mk2d27a35f2c21193ee482d3972539f56f892aa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:15:18.463297   30017 start.go:356] acquired machines lock for "newest-cni-20220601121425-16804" in 67.531µs
	I0601 12:15:18.463320   30017 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:15:18.463327   30017 fix.go:55] fixHost starting: 
	I0601 12:15:18.463535   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:18.531996   30017 fix.go:103] recreateIfNeeded on newest-cni-20220601121425-16804: state=Stopped err=<nil>
	W0601 12:15:18.532020   30017 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:15:18.575630   30017 out.go:177] * Restarting existing docker container for "newest-cni-20220601121425-16804" ...
	I0601 12:15:18.597027   30017 cli_runner.go:164] Run: docker start newest-cni-20220601121425-16804
	I0601 12:15:18.961320   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:19.039434   30017 kic.go:416] container "newest-cni-20220601121425-16804" state is running.
	I0601 12:15:19.040066   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:19.122459   30017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/config.json ...
	I0601 12:15:19.122847   30017 machine.go:88] provisioning docker machine ...
	I0601 12:15:19.122870   30017 ubuntu.go:169] provisioning hostname "newest-cni-20220601121425-16804"
	I0601 12:15:19.122959   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.203538   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.203721   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.203734   30017 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601121425-16804 && echo "newest-cni-20220601121425-16804" | sudo tee /etc/hostname
	I0601 12:15:19.330895   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601121425-16804
	
	I0601 12:15:19.330974   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.406319   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.406535   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.406550   30017 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601121425-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601121425-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601121425-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:15:19.525134   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:15:19.525155   30017 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:15:19.525179   30017 ubuntu.go:177] setting up certificates
	I0601 12:15:19.525188   30017 provision.go:83] configureAuth start
	I0601 12:15:19.525249   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:19.605963   30017 provision.go:138] copyHostCerts
	I0601 12:15:19.606064   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:15:19.606075   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:15:19.606194   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:15:19.606452   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:15:19.606461   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:15:19.606544   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:15:19.606746   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:15:19.606754   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:15:19.606830   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:15:19.606964   30017 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601121425-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601121425-16804]
	I0601 12:15:19.708984   30017 provision.go:172] copyRemoteCerts
	I0601 12:15:19.709053   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:15:19.709100   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.783853   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:19.868774   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:15:19.886007   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 12:15:19.903518   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 12:15:19.920926   30017 provision.go:86] duration metric: configureAuth took 395.730263ms
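	
The configureAuth step above regenerates a server certificate whose SAN list covers the node IP, loopback, and the machine's hostname aliases. A self-signed stand-in with the same certificate shape, for illustration only (the real provisioner signs against the profile's CA key rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN list and org copied from the provision.go log line above;
	// everything else here is an assumption for the sketch.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20220601121425-16804"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		DNSNames:     []string{"localhost", "minikube", "newest-cni-20220601121425-16804"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}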
	I0601 12:15:19.920938   30017 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:15:19.921089   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:19.921150   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.993596   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.993740   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.993749   30017 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:15:20.111525   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:15:20.111538   30017 ubuntu.go:71] root file system type: overlay
	I0601 12:15:20.111694   30017 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:15:20.111786   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.184583   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:20.184728   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:20.184777   30017 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:15:20.308016   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:15:20.308149   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.384660   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:20.384802   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:20.384815   30017 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:15:20.505728   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:15:20.505742   30017 machine.go:91] provisioned docker machine in 1.382897342s
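	
The diff/mv one-liner above installs the rendered unit only when it differs from what is already on disk, so an unchanged configuration avoids a needless daemon-reload and docker restart. The same idempotent pattern sketched in Go (paths and systemctl invocations mirror the log; the helper itself is illustrative, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// syncUnit replaces the installed unit with the freshly rendered one only
// when their contents differ, then reloads systemd and restarts docker.
func syncUnit(installed, rendered string) error {
	old, _ := os.ReadFile(installed) // a missing unit simply counts as "changed"
	fresh, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(old, fresh) {
		return nil // unchanged: no move, no daemon-reload, no restart
	}
	if err := os.Rename(rendered, installed); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := syncUnit("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}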
	I0601 12:15:20.505757   30017 start.go:306] post-start starting for "newest-cni-20220601121425-16804" (driver="docker")
	I0601 12:15:20.505772   30017 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:15:20.505836   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:15:20.505881   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.578638   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:20.665477   30017 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:15:20.669149   30017 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:15:20.669167   30017 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:15:20.669174   30017 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:15:20.669178   30017 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:15:20.669187   30017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:15:20.669292   30017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:15:20.669427   30017 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:15:20.669624   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:15:20.677091   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:15:20.694336   30017 start.go:309] post-start completed in 188.569022ms
	I0601 12:15:20.694408   30017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:15:20.694474   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.765912   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:20.848377   30017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:15:20.853146   30017 fix.go:57] fixHost completed within 2.389833759s
	I0601 12:15:20.853157   30017 start.go:81] releasing machines lock for "newest-cni-20220601121425-16804", held for 2.389868555s
	I0601 12:15:20.853232   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:20.927151   30017 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:15:20.927156   30017 ssh_runner.go:195] Run: systemctl --version
	I0601 12:15:20.927211   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.927230   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:21.005587   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:21.008584   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:21.222895   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:15:21.235416   30017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:15:21.245540   30017 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:15:21.245597   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:15:21.254909   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:15:21.269044   30017 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:15:21.339555   30017 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:15:21.409052   30017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:15:21.419341   30017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:15:21.493119   30017 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:15:21.503208   30017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:15:21.539095   30017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:15:21.620908   30017 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:15:21.621108   30017 cli_runner.go:164] Run: docker exec -t newest-cni-20220601121425-16804 dig +short host.docker.internal
	I0601 12:15:21.756400   30017 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:15:21.756560   30017 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:15:21.760987   30017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:15:21.771888   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:21.870120   30017 out.go:177]   - kubelet.network-plugin=cni
	I0601 12:15:21.891289   30017 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 12:15:21.913108   30017 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:15:21.913239   30017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:15:21.945830   30017 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 12:15:21.945847   30017 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:15:21.945904   30017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:15:21.978568   30017 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 12:15:21.978600   30017 cache_images.go:84] Images are preloaded, skipping loading
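	
"Images are preloaded, skipping loading" means every image expected from the preload already appears in the docker images listing captured above. A sketch of that membership check (hypothetical helper; the expected list here is abbreviated):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every expected image shows up in
// docker images --format {{.Repository}}:{{.Tag}} output.
func imagesPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"k8s.gcr.io/kube-apiserver:v1.23.6",
		"k8s.gcr.io/pause:3.6",
	})
	fmt.Println(ok, err)
}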
	I0601 12:15:21.978678   30017 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:15:22.052046   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:22.052057   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:22.052077   30017 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 12:15:22.052108   30017 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601121425-16804 NodeName:newest-cni-20220601121425-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:15:22.052229   30017 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220601121425-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 12:15:22.052316   30017 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601121425-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 12:15:22.052375   30017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:15:22.060110   30017 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:15:22.060163   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:15:22.067009   30017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0601 12:15:22.079768   30017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:15:22.092592   30017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2188 bytes)
	I0601 12:15:22.105644   30017 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:15:22.109585   30017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:15:22.119524   30017 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804 for IP: 192.168.58.2
	I0601 12:15:22.119629   30017 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:15:22.119701   30017 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:15:22.119783   30017 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/client.key
	I0601 12:15:22.119849   30017 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.key.cee25041
	I0601 12:15:22.119898   30017 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.key
	I0601 12:15:22.120087   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:15:22.120128   30017 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:15:22.120139   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:15:22.120167   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:15:22.120203   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:15:22.120233   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:15:22.120294   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:15:22.120897   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:15:22.138917   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:15:22.156269   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:15:22.173707   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 12:15:22.191392   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:15:22.208705   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:15:22.225757   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:15:22.243397   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:15:22.260267   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:15:22.278248   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:15:22.295471   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:15:22.313361   30017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:15:22.325966   30017 ssh_runner.go:195] Run: openssl version
	I0601 12:15:22.331944   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:15:22.339732   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.343889   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.343932   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.349483   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 12:15:22.356904   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:15:22.364729   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.368546   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.368603   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.373820   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:15:22.381072   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:15:22.388832   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.393055   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.393215   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.399338   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
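
The openssl/ln pairs above implement the OpenSSL c_rehash convention: /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0) is a symlink to the PEM file so TLS libraries can find the CA by hash. A sketch of the same two steps in Go (paths are illustrative; requires the openssl binary on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehash computes the certificate's subject hash via openssl and links
// <certDir>/<hash>.0 at the PEM file, as the log's shell commands do.
func rehash(pem, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any stale link; error intentionally ignored
	return os.Symlink(pem, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
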
	I0601 12:15:22.406808   30017 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:22.406948   30017 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:15:22.436239   30017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:15:22.444134   30017 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:15:22.444147   30017 kubeadm.go:626] restartCluster start
	I0601 12:15:22.444191   30017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:15:22.451114   30017 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.451166   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:22.526099   30017 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601121425-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:22.526306   30017 kubeconfig.go:127] "newest-cni-20220601121425-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:15:22.526699   30017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:22.528121   30017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:15:22.535877   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.535949   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.544928   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.745924   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.746094   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.756505   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.947083   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.947296   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.957656   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.147069   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.147288   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.157915   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.345086   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.345192   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.353982   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.545059   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.545242   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.555850   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.745457   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.745640   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.756690   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.945536   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.945668   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.955857   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.145431   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.145578   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.155775   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.345524   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.345625   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.356916   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.545430   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.545593   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.556086   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.746772   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.746951   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.757868   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.946734   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.946835   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.957121   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.146700   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.146894   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.158129   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.346742   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.346876   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.356926   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.546642   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.546735   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.556027   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.556041   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.556098   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.564985   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.565004   30017 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
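
The repeated "Checking apiserver status" / pgrep failures above are a fixed-interval poll against a deadline that ends in the "timed out waiting for the condition" verdict. A stdlib-only Go sketch of that pattern (the ~200ms interval and the timeout are illustrative; this is not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline
// passes, mirroring the cadence of the checks in the log above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when nothing matches, so err == nil
		// means the process was found.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 200*time.Millisecond, 3*time.Second)
	fmt.Println(err)
}
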
	I0601 12:15:25.565016   30017 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:15:25.565082   30017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:15:25.602459   30017 docker.go:442] Stopping containers: [68d9566a5229 9866045a2740 08fe7c389d05 483211ea09d2 ad8f707a9ba6 acf8b9eb91df 0aace92ddb91 bfd9ea02d125 e12d8d3ebb52 e1445bd1efd3 f50e317e9858 11c48b791323 0b270245a55f c410fd12249e 3a157a1c3457 6ae49c2db4a0 4787fe993ca1 c862ef500594]
	I0601 12:15:25.602539   30017 ssh_runner.go:195] Run: docker stop 68d9566a5229 9866045a2740 08fe7c389d05 483211ea09d2 ad8f707a9ba6 acf8b9eb91df 0aace92ddb91 bfd9ea02d125 e12d8d3ebb52 e1445bd1efd3 f50e317e9858 11c48b791323 0b270245a55f c410fd12249e 3a157a1c3457 6ae49c2db4a0 4787fe993ca1 c862ef500594
	I0601 12:15:25.634269   30017 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:15:25.645054   30017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:15:25.653095   30017 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 19:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 19:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  1 19:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 19:14 /etc/kubernetes/scheduler.conf
	
	I0601 12:15:25.653147   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 12:15:25.660894   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 12:15:25.668544   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 12:15:25.675734   30017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.675782   30017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:15:25.682775   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 12:15:25.689821   30017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.689865   30017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 12:15:25.697022   30017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:15:25.704775   30017 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:15:25.704788   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:25.750948   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.446128   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.578782   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.628887   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.681058   30017 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:15:26.681141   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:27.192960   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:27.692867   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:28.192589   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:28.203564   30017 api_server.go:71] duration metric: took 1.522538237s to wait for apiserver process to appear ...
	I0601 12:15:28.203585   30017 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:15:28.203598   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:31.033454   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 12:15:31.033469   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 12:15:31.533706   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:31.539933   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:15:31.539949   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:15:32.033574   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:32.040068   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:15:32.040083   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:15:32.533712   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:32.539591   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 200:
	ok
	I0601 12:15:32.546412   30017 api_server.go:140] control plane version: v1.23.6
	I0601 12:15:32.546424   30017 api_server.go:130] duration metric: took 4.342864983s to wait for apiserver health ...
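
The healthz wait above goes 403 (anonymous requests are rejected until the RBAC bootstrap roles exist), then 500 while post-start hooks finish, then 200. A minimal Go sketch of such a probe against the forwarded port from the log; InsecureSkipVerify stands in for trusting the minikube CA, and the intervals are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok", printing intermediate failures as the log does.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	fmt.Println(probeHealthz("https://127.0.0.1:63285/healthz", 30*time.Second))
}
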
	I0601 12:15:32.546432   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:32.546437   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:32.546449   30017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:15:32.556727   30017 system_pods.go:59] 8 kube-system pods found
	I0601 12:15:32.556746   30017 system_pods.go:61] "coredns-64897985d-j2plh" [3a8967e9-d37b-4f71-b57f-0b3a34dbdf08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 12:15:32.556751   30017 system_pods.go:61] "etcd-newest-cni-20220601121425-16804" [c181135a-268d-4847-8dd4-ec0e0f06226e] Running
	I0601 12:15:32.556758   30017 system_pods.go:61] "kube-apiserver-newest-cni-20220601121425-16804" [30ec5624-7260-4516-a9b7-2befbb6626aa] Running
	I0601 12:15:32.556762   30017 system_pods.go:61] "kube-controller-manager-newest-cni-20220601121425-16804" [ecf69675-926e-41de-a951-ddc2afa7194b] Running
	I0601 12:15:32.556767   30017 system_pods.go:61] "kube-proxy-w4cvx" [8cd61f44-5d14-434c-a84e-ffd68ac7bc21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 12:15:32.556773   30017 system_pods.go:61] "kube-scheduler-newest-cni-20220601121425-16804" [15357952-87e8-4636-8cdf-eb7113a0682b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 12:15:32.556780   30017 system_pods.go:61] "metrics-server-b955d9d8-x4szx" [caffaac7-3821-49eb-b2de-cc43c2d6c5c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:15:32.556784   30017 system_pods.go:61] "storage-provisioner" [ef765f27-a5f6-468b-9428-8a223e30a190] Running
	I0601 12:15:32.556788   30017 system_pods.go:74] duration metric: took 10.334849ms to wait for pod list to return data ...
	I0601 12:15:32.556794   30017 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:15:32.561801   30017 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:15:32.561816   30017 node_conditions.go:123] node cpu capacity is 6
	I0601 12:15:32.561826   30017 node_conditions.go:105] duration metric: took 5.028617ms to run NodePressure ...
	I0601 12:15:32.561842   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:32.734164   30017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:15:32.745262   30017 ops.go:34] apiserver oom_adj: -16
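
The oom_adj check above shells out to `cat /proc/$(pgrep kube-apiserver)/oom_adj`; -16 means the kernel OOM killer deprioritizes the apiserver. The same read in Go, as a sketch that assumes exactly one matching process:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj returns the oom_adj value of the newest
// kube-apiserver process, mirroring the shell pipeline in the log.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.TrimSpace(string(out))
	b, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := apiserverOOMAdj()
	fmt.Println(v, err)
}
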
	I0601 12:15:32.745275   30017 kubeadm.go:630] restartCluster took 10.301195785s
	I0601 12:15:32.745282   30017 kubeadm.go:397] StartCluster complete in 10.33855509s
	I0601 12:15:32.745298   30017 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:32.745396   30017 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:32.746012   30017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:32.749598   30017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601121425-16804" rescaled to 1
	I0601 12:15:32.749637   30017 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:15:32.749651   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:15:32.749675   30017 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:15:32.810213   30017 out.go:177] * Verifying Kubernetes components...
	I0601 12:15:32.749931   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:32.810312   30017 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810314   30017 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810324   30017 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810351   30017 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.814156   30017 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 12:15:32.847251   30017 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601121425-16804"
	I0601 12:15:32.847255   30017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601121425-16804"
	W0601 12:15:32.847275   30017 addons.go:165] addon dashboard should already be in state true
	I0601 12:15:32.847284   30017 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601121425-16804"
	W0601 12:15:32.847303   30017 addons.go:165] addon metrics-server should already be in state true
	I0601 12:15:32.847264   30017 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601121425-16804"
	I0601 12:15:32.847322   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 12:15:32.847352   30017 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:15:32.847390   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847393   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847450   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847745   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.848906   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.848930   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.849156   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.874342   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:32.980743   30017 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601121425-16804"
	W0601 12:15:32.992368   30017 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:15:32.992341   30017 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 12:15:32.992427   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:33.011979   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:15:33.011996   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:15:33.012078   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.013569   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:33.037211   30017 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:15:33.068082   30017 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:15:33.111016   30017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:15:33.111126   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:33.148302   30017 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:15:33.171397   30017 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:15:33.192123   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:15:33.184596   30017 api_server.go:71] duration metric: took 434.942794ms to wait for apiserver process to appear ...
	I0601 12:15:33.192147   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:15:33.192164   30017 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:15:33.192166   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:15:33.192181   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:33.192244   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.192273   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.206856   30017 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:15:33.206882   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:15:33.206858   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 200:
	ok
	I0601 12:15:33.206999   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.209882   30017 api_server.go:140] control plane version: v1.23.6
	I0601 12:15:33.209903   30017 api_server.go:130] duration metric: took 17.729398ms to wait for apiserver health ...
	I0601 12:15:33.209909   30017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:15:33.212306   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.220067   30017 system_pods.go:59] 8 kube-system pods found
	I0601 12:15:33.220104   30017 system_pods.go:61] "coredns-64897985d-j2plh" [3a8967e9-d37b-4f71-b57f-0b3a34dbdf08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 12:15:33.220124   30017 system_pods.go:61] "etcd-newest-cni-20220601121425-16804" [c181135a-268d-4847-8dd4-ec0e0f06226e] Running
	I0601 12:15:33.220134   30017 system_pods.go:61] "kube-apiserver-newest-cni-20220601121425-16804" [30ec5624-7260-4516-a9b7-2befbb6626aa] Running
	I0601 12:15:33.220141   30017 system_pods.go:61] "kube-controller-manager-newest-cni-20220601121425-16804" [ecf69675-926e-41de-a951-ddc2afa7194b] Running
	I0601 12:15:33.220151   30017 system_pods.go:61] "kube-proxy-w4cvx" [8cd61f44-5d14-434c-a84e-ffd68ac7bc21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 12:15:33.220167   30017 system_pods.go:61] "kube-scheduler-newest-cni-20220601121425-16804" [15357952-87e8-4636-8cdf-eb7113a0682b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 12:15:33.220181   30017 system_pods.go:61] "metrics-server-b955d9d8-x4szx" [caffaac7-3821-49eb-b2de-cc43c2d6c5c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:15:33.220201   30017 system_pods.go:61] "storage-provisioner" [ef765f27-a5f6-468b-9428-8a223e30a190] Running
	I0601 12:15:33.220209   30017 system_pods.go:74] duration metric: took 10.294658ms to wait for pod list to return data ...
	I0601 12:15:33.220218   30017 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:15:33.223744   30017 default_sa.go:45] found service account: "default"
	I0601 12:15:33.223760   30017 default_sa.go:55] duration metric: took 3.535466ms for default service account to be created ...
	I0601 12:15:33.223776   30017 kubeadm.go:572] duration metric: took 474.122479ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 12:15:33.223798   30017 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:15:33.228770   30017 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:15:33.228786   30017 node_conditions.go:123] node cpu capacity is 6
	I0601 12:15:33.228800   30017 node_conditions.go:105] duration metric: took 4.995287ms to run NodePressure ...
	I0601 12:15:33.228813   30017 start.go:213] waiting for startup goroutines ...
	I0601 12:15:33.301789   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.313361   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.319829   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.382558   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:15:33.382572   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:15:33.463411   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:15:33.463454   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:15:33.479163   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:15:33.479594   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:15:33.479617   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:15:33.484722   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:15:33.491345   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:15:33.491376   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:15:33.575793   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:15:33.575859   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:15:33.593652   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:15:33.681482   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:15:33.681500   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:15:33.857868   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:15:33.857885   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:15:33.892748   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:15:33.892767   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:15:33.984332   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:15:33.984347   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:15:34.064156   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:15:34.064169   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:15:34.086338   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:15:34.086357   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:15:34.173318   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:15:34.173333   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:15:34.196575   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:15:34.695837   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.216659486s)
	I0601 12:15:34.695873   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211144239s)
	I0601 12:15:34.757351   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.163679978s)
	I0601 12:15:34.757383   30017 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601121425-16804"
	I0601 12:15:34.884487   30017 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 12:15:34.905569   30017 addons.go:417] enableAddons completed in 2.155912622s
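
Each addon above is scp'd into /etc/kubernetes/addons/ and then applied in a single kubectl invocation with repeated -f flags. A hedged Go sketch of that final step (the log sets KUBECONFIG in the environment; the --kubeconfig flag used here is an equivalent alternative, and the paths are taken from the log as placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests builds and runs a single `kubectl apply -f a.yaml -f b.yaml ...`
// command, as the addon installation in the log does.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"--kubeconfig=" + kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command(kubectl, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.23.6/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/storageclass.yaml", "/etc/kubernetes/addons/storage-provisioner.yaml"},
	)
	fmt.Println(err)
}
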
	I0601 12:15:34.935673   30017 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:15:34.958503   30017 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601121425-16804" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:15:19 UTC, end at Wed 2022-06-01 19:16:12 UTC. --
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.253063772Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.253102430Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.260191171Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.265841590Z" level=info msg="Loading containers: start."
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.456343571Z" level=info msg="Removing stale sandbox cda3da1d1042ba983e5c58bd82e767e508dcc58b41f71f063234b141d085e4c5 (9866045a2740bb770b8a5c448481610277558bb3fa8bc2f52abcabf1811d39b0)"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.458304846Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 704eee32dd4a448fb95161e72caa8f9cb16e2ae1a46d9fc87bb76db4fbceee58 95d5c6d8681a3f378adbae8ece9e890ffc47e7bcc9f80976f469c250d6be359b], retrying...."
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.547146017Z" level=info msg="Removing stale sandbox 5dfa24c43b4246b44016909c3cc10162adfcb8879dee04ccc44b0e4e93fbaf72 (c862ef500594827f73b13ef469aaccbd485ba298e2ea103b5acbdc12e53e15bc)"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.548418120Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint e3c96159f8338709d12db2e0d8f19b16fc35c7b931a38c937445cad07017c178 871e4fd6a1c1a9c71b268877f4e3d9d1e00cd3d3fa6394ffbda87c018423036c], retrying...."
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.657560646Z" level=info msg="Removing stale sandbox a6406b67be79a9101ca504036f246ee0cac98eedf48c7f5a537f31a1efca7565 (4787fe993ca148e9a873cb18417ef3c965126edb086b67b73b38296644605e16)"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.658772977Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint e3c96159f8338709d12db2e0d8f19b16fc35c7b931a38c937445cad07017c178 d9e8317f807cc069efdbdca1f5d94a727afd804f86c364a7c82e8c8b01f60917], retrying...."
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.685003778Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.725841976Z" level=info msg="Loading containers: done."
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.735452013Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.735531362Z" level=info msg="Daemon has completed initialization"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 systemd[1]: Started Docker Application Container Engine.
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.760412273Z" level=info msg="API listen on [::]:2376"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.762927777Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 19:15:32 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:32.761109785Z" level=info msg="ignoring event" container=f5eb74069122586ecbb8de72491c40684c62e7e42c29260158bfda9e2d0e7b63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:34 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:34.187838089Z" level=info msg="ignoring event" container=9ea85b42dfba0152e44de0d130bad25fcc7e706635d8b0a039c7d0ddd452726f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:34 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:34.265848931Z" level=info msg="ignoring event" container=ef21ce7592f74cc735034911681b88d2badc41c5d6de62fc86a3bf9d67b857fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:35 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:35.504240231Z" level=info msg="ignoring event" container=25b1941367e50f0d3fe9a7a3c265b0ce6186d5dec187f15134bfd7ac87385063 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:35 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:35.511151753Z" level=info msg="ignoring event" container=c8006b995b7fe142c6ccb5186cda04cf5a7a1398be3c7cc07bfa514d22cecf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:36 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:36.515308761Z" level=info msg="ignoring event" container=fa56fc719d9b61059d0e92e514f32dad998f8ee008318a36de50d7e25bad4dc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:36 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:36.529726523Z" level=info msg="ignoring event" container=8d71e51fbce728249dc93bc78ee243e2e415bb5e885320b14cae1d6b955f7d23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:09 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:09.762985490Z" level=info msg="ignoring event" container=b3234ec468d6d6aba0b6482fd904c87428a735a22b9f36315007a3f9581a2889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	9254f80cdc825       4c03754524064       39 seconds ago       Running             kube-proxy                1                   6208716e43268
	b3234ec468d6d       6e38f40d628db       40 seconds ago       Exited              storage-provisioner       1                   bd3c8e7e8e4b8
	441fa795fe1c1       595f327f224a4       45 seconds ago       Running             kube-scheduler            1                   c3dc76afa24d2
	91d8cb5a153b0       25f8c7f3da61c       45 seconds ago       Running             etcd                      1                   36ae06fc610d0
	194b9a4c9c8ec       df7b72818ad2e       45 seconds ago       Running             kube-controller-manager   1                   a6b68b0e15635
	1fe6b955488a3       8fa62c12256df       45 seconds ago       Running             kube-apiserver            1                   81546674a3803
	0aace92ddb91e       4c03754524064       About a minute ago   Exited              kube-proxy                0                   e1445bd1efd3a
	f50e317e9858b       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   3a157a1c34579
	11c48b791323c       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   6ae49c2db4a05
	0b270245a55f2       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   4787fe993ca14
	c410fd12249eb       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   c862ef5005948
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220601121425-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220601121425-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=newest-cni-20220601121425-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T12_14_48_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 19:14:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220601121425-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 19:16:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:14:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:14:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:14:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220601121425-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                b6b66e9d-2c51-4b0e-b036-bbe63b69343a
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-j2plh                                      100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     72s
	  kube-system                 etcd-newest-cni-20220601121425-16804                         100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kube-apiserver-newest-cni-20220601121425-16804               250m (4%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-newest-cni-20220601121425-16804     200m (3%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-w4cvx                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-newest-cni-20220601121425-16804               100m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 metrics-server-b955d9d8-x4szx                                100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         69s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-sfkqb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-fpbtt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 38s                kube-proxy  
	  Normal  Starting                 70s                kube-proxy  
	  Normal  NodeHasSufficientMemory  92s (x5 over 92s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s (x5 over 92s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s (x4 over 92s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 92s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    85s                kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  Starting                 85s                kubelet     Starting kubelet.
	  Normal  NodeReady                75s                kubelet     Node newest-cni-20220601121425-16804 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  47s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 47s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     46s (x7 over 47s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    46s (x8 over 47s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  46s (x8 over 47s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet     Node newest-cni-20220601121425-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet     Node newest-cni-20220601121425-16804 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [91d8cb5a153b] <==
	* {"level":"info","ts":"2022-06-01T19:15:27.922Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-01T19:15:27.923Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-01T19:15:27.923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-06-01T19:15:27.923Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-06-01T19:15:27.923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:15:27.923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220601121425-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:15:29.611Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T19:15:29.612Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> etcd [f50e317e9858] <==
	* {"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220601121425-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.446Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-01T19:14:43.446Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-06-01T19:14:46.426Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.586754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-06-01T19:14:46.426Z","caller":"traceutil/trace.go:171","msg":"trace[1028010844] range","detail":"{range_begin:/registry/clusterroles/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:106; }","duration":"110.704074ms","start":"2022-06-01T19:14:46.315Z","end":"2022-06-01T19:14:46.426Z","steps":["trace[1028010844] 'agreement among raft nodes before linearized reading'  (duration: 35.574888ms)","trace[1028010844] 'range keys from in-memory index tree'  (duration: 74.994545ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:15:05.091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T19:15:05.091Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220601121425-16804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/01 19:15:05 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 19:15:05 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T19:15:05.100Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-01T19:15:05.103Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:05.104Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:05.104Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220601121425-16804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:16:14 up  1:18,  0 users,  load average: 3.28, 1.22, 0.90
	Linux newest-cni-20220601121425-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1fe6b955488a] <==
	* I0601 19:15:31.158049       1 cache.go:39] Caches are synced for autoregister controller
	I0601 19:15:31.158063       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 19:15:31.158145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 19:15:31.158220       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 19:15:31.158240       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 19:15:31.161963       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:15:32.016808       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 19:15:32.016842       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 19:15:32.022381       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0601 19:15:32.183900       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:15:32.183973       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:15:32.183985       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 19:15:32.639161       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:15:32.666668       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:15:32.690153       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:15:32.701160       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 19:15:32.726938       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 19:15:34.191769       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:15:34.710189       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 19:15:34.824734       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.233.219]
	I0601 19:15:34.833994       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.155.216]
	I0601 19:16:10.744215       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 19:16:11.053733       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:16:11.104111       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [c410fd12249e] <==
	* W0601 19:15:14.319298       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.322487       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.362789       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.372646       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.376937       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.396074       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.419030       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.432070       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.444038       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.497287       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.508350       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.526305       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.682507       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.724280       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.728793       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.744850       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.748317       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.793796       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.873863       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.992121       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.007051       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.013868       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.077985       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.132516       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.178203       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [0b270245a55f] <==
	* I0601 19:15:01.518897       1 shared_informer.go:247] Caches are synced for node 
	I0601 19:15:01.518937       1 range_allocator.go:173] Starting range CIDR allocator
	I0601 19:15:01.518941       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0601 19:15:01.518949       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0601 19:15:01.520916       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-j2plh"
	I0601 19:15:01.521459       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 19:15:01.524928       1 range_allocator.go:374] Set node newest-cni-20220601121425-16804 PodCIDR to [192.168.0.0/24]
	I0601 19:15:01.526165       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-6rmbc"
	I0601 19:15:01.534380       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 19:15:01.603449       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 19:15:01.685793       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 19:15:01.701684       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0601 19:15:01.710686       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:15:01.726190       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 19:15:01.730567       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:15:01.732197       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 19:15:01.735978       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-6rmbc"
	I0601 19:15:01.739244       1 shared_informer.go:247] Caches are synced for job 
	I0601 19:15:02.152476       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:15:02.200578       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:15:02.200611       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 19:15:04.348086       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 19:15:04.350818       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 19:15:04.355507       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 19:15:04.361133       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-x4szx"
	
	* 
	* ==> kube-controller-manager [194b9a4c9c8e] <==
	* I0601 19:16:10.674733       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0601 19:16:10.678081       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0601 19:16:10.681469       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 19:16:10.701017       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 19:16:10.701041       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0601 19:16:10.739121       1 shared_informer.go:247] Caches are synced for endpoint 
	I0601 19:16:10.741208       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 19:16:10.742535       1 shared_informer.go:247] Caches are synced for job 
	I0601 19:16:10.757909       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0601 19:16:10.808815       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:16:10.840290       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 19:16:10.850324       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 19:16:10.853026       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 19:16:10.855565       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 19:16:10.858822       1 shared_informer.go:247] Caches are synced for expand 
	I0601 19:16:10.862355       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:16:10.902665       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 19:16:10.952598       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 19:16:11.055999       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 19:16:11.058053       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 19:16:11.207239       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-sfkqb"
	I0601 19:16:11.208925       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-fpbtt"
	I0601 19:16:11.360947       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:16:11.369265       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:16:11.369294       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [0aace92ddb91] <==
	* I0601 19:15:02.205856       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:15:02.205944       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:15:02.205966       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:15:02.232561       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:15:02.232634       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:15:02.232641       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:15:02.232651       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:15:02.233083       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:15:02.233992       1 config.go:317] "Starting service config controller"
	I0601 19:15:02.234036       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:15:02.234069       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:15:02.234072       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:15:02.336314       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:15:02.336324       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9254f80cdc82] <==
	* I0601 19:15:34.091183       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:15:34.091251       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:15:34.091272       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:15:34.185368       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:15:34.185418       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:15:34.185424       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:15:34.185927       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:15:34.186329       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:15:34.189304       1 config.go:317] "Starting service config controller"
	I0601 19:15:34.189318       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:15:34.189422       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:15:34.189428       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:15:34.290045       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:15:34.290061       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [11c48b791323] <==
	* E0601 19:14:45.330945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:14:45.328569       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 19:14:45.330984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 19:14:45.328834       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 19:14:45.330992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 19:14:45.328846       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:45.331004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 19:14:46.239492       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:14:46.239529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:14:46.271497       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:14:46.271536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:14:46.280448       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:46.280464       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 19:14:46.322763       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:14:46.322800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:14:46.525387       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 19:14:46.525424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 19:14:46.538416       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:46.538454       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:14:46.545691       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:46.545727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0601 19:14:49.521996       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 19:15:05.101746       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 19:15:05.102734       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0601 19:15:05.102966       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [441fa795fe1c] <==
	* W0601 19:15:27.908158       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0601 19:15:28.530690       1 serving.go:348] Generated self-signed cert in-memory
	W0601 19:15:31.060714       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 19:15:31.060759       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:15:31.060768       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 19:15:31.060774       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 19:15:31.071235       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 19:15:31.073227       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 19:15:31.073298       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 19:15:31.073308       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 19:15:31.073353       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0601 19:15:31.083235       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083286       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083332       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083355       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083361       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083368       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083405       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083453       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 19:15:31.173962       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:15:19 UTC, end at Wed 2022-06-01 19:16:15 UTC. --
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.347862    3517 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-1891b405730e75c9355c4e61 -m comment --comment name: \"crio\" id: \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1891b405730e75c9355c4e61':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-x4szx" podSandboxID={Type:docker ID:a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de} podNetnsPath="/proc/4530/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.360352    3517 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-1a985c2a499bdd44fa29b835 -m comment --comment name: \"crio\" id: \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1a985c2a499bdd44fa29b835':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb" podSandboxID={Type:docker ID:57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8} podNetnsPath="/proc/4566/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.418157    3517 kubelet.go:1742] "Failed creating a mirror pod for" err="pods \"kube-scheduler-newest-cni-20220601121425-16804\" already exists" pod="kube-system/kube-scheduler-newest-cni-20220601121425-16804"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.686438    3517 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-1a985c2a499bdd44fa29b835 -m comment --comment name: \"crio\" id: \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1a985c2a499bdd44fa29b835':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.686641    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-1a985c2a499bdd44fa29b835 -m comment --comment name: \"crio\" id: \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1a985c2a499bdd44fa29b835':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.686759    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-1a985c2a499bdd44fa29b835 -m comment --comment name: \"crio\" id: \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1a985c2a499bdd44fa29b835':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.687002    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard(1b46de0b-afdd-480c-b25b-a6ca05ecd307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard(1b46de0b-afdd-480c-b25b-a6ca05ecd307)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.18 -j CNI-1a985c2a499bdd44fa29b835 -m comment --comment name: \\\"crio\\\" id: \\\"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1a985c2a499bdd44fa29b835':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb" podUID=1b46de0b-afdd-480c-b25b-a6ca05ecd307
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.694743    3517 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-1891b405730e75c9355c4e61 -m comment --comment name: \"crio\" id: \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1891b405730e75c9355c4e61':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.694836    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-1891b405730e75c9355c4e61 -m comment --comment name: \"crio\" id: \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1891b405730e75c9355c4e61':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-x4szx"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.694862    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-1891b405730e75c9355c4e61 -m comment --comment name: \"crio\" id: \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1891b405730e75c9355c4e61':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-x4szx"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.694941    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-x4szx_kube-system(caffaac7-3821-49eb-b2de-cc43c2d6c5c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-x4szx_kube-system(caffaac7-3821-49eb-b2de-cc43c2d6c5c8)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\\\" network for pod \\\"metrics-server-b955d9d8-x4szx\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-x4szx_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\\\" network for pod \\\"metrics-server-b955d9d8-x4szx\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-b955d9d8-x4szx_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.17 -j CNI-1891b405730e75c9355c4e61 -m comment --comment name: \\\"crio\\\" id: \\\"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1891b405730e75c9355c4e61':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-x4szx" podUID=caffaac7-3821-49eb-b2de-cc43c2d6c5c8
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.703098    3517 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-j2plh" podSandboxID={Type:docker ID:26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2} podNetnsPath="/proc/4776/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:14.741641    3517 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-c13df83d79ee5f1964178bbb -m comment --comment name: \"crio\" id: \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c13df83d79ee5f1964178bbb':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-j2plh" podSandboxID={Type:docker ID:26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2} podNetnsPath="/proc/4776/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:15.098883    3517 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to set up pod \"coredns-64897985d-j2plh_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to teardown pod \"coredns-64897985d-j2plh_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-c13df83d79ee5f1964178bbb -m comment --comment name: \"crio\" id: \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c13df83d79ee5f1964178bbb':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:15.098930    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to set up pod \"coredns-64897985d-j2plh_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to teardown pod \"coredns-64897985d-j2plh_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-c13df83d79ee5f1964178bbb -m comment --comment name: \"crio\" id: \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c13df83d79ee5f1964178bbb':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-j2plh"
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:15.098956    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to set up pod \"coredns-64897985d-j2plh_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to teardown pod \"coredns-64897985d-j2plh_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-c13df83d79ee5f1964178bbb -m comment --comment name: \"crio\" id: \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c13df83d79ee5f1964178bbb':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-j2plh"
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:15.099030    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-j2plh_kube-system(3a8967e9-d37b-4f71-b57f-0b3a34dbdf08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-j2plh_kube-system(3a8967e9-d37b-4f71-b57f-0b3a34dbdf08)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\\\" network for pod \\\"coredns-64897985d-j2plh\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-j2plh_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\\\" network for pod \\\"coredns-64897985d-j2plh\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-j2plh_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.19 -j CNI-c13df83d79ee5f1964178bbb -m comment --comment name: \\\"crio\\\" id: \\\"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c13df83d79ee5f1964178bbb':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-j2plh" podUID=3a8967e9-d37b-4f71-b57f-0b3a34dbdf08
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.256921    3517 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-j2plh_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\""
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.258041    3517 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2"
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.261051    3517 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2\""
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.269093    3517 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\""
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.270892    3517 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8"
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.272414    3517 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8\""
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.273016    3517 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de\""
	Jun 01 19:16:15 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:15.273478    3517 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"7bb13062789beae21d787d65c242b7902afe1b34d6dc5e248a5e401cebfd9565\""
	
	* 
	* ==> storage-provisioner [b3234ec468d6] <==
	* I0601 19:15:32.998145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0601 19:16:09.651018       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	

                                                
                                                
-- /stdout --
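Triage note: every sandbox failure in the kubelet log above is the same two-part error: the CNI bridge plugin cannot assign an address ("failed to set bridge addr: could not add IP address to \"cni0\": permission denied"), and the subsequent teardown fails with iptables exit status 2 because the per-pod CNI-* chain is already gone. A minimal way to confirm this from the node, as a sketch, assuming shell access via minikube ssh (the profile and chain names below are taken from the errors above):

	$ minikube ssh -p newest-cni-20220601121425-16804
	# does cni0 exist, and does it carry an address?
	$ ip addr show cni0
	# which CNI-* chains are still referenced from NAT POSTROUTING?
	$ sudo iptables -t nat -L POSTROUTING -n --line-numbers
	# the teardown target from the kubelet error; if the setup never completed,
	# iptables should report "No chain/target/match by that name"
	$ sudo iptables -t nat -L CNI-1a985c2a499bdd44fa29b835 -n

The iptables failure is only fallout from the failed setup; the actionable symptom is the permission-denied address assignment on cni0.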
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220601121425-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220601121425-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.04403774s)
helpers_test.go:270: non-running pods: coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220601121425-16804 describe pod coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220601121425-16804 describe pod coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt: exit status 1 (374.848566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-j2plh" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-x4szx" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-sfkqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-fpbtt" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220601121425-16804 describe pod coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt: exit status 1
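Triage note: the NotFound errors above are expected regardless of pod state: the describe invocation names the pods without a namespace flag, so kubectl searches only the "default" namespace, while the listed pods live in kube-system and kubernetes-dashboard. A namespace-qualified sketch (context and pod names taken from this run):

	$ kubectl --context newest-cni-20220601121425-16804 -n kube-system describe pod coredns-64897985d-j2plh
	$ kubectl --context newest-cni-20220601121425-16804 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-56974995fc-sfkqb

Alternatively, capturing all non-running pods in one call sidesteps both the namespace issue and the window in which cleanup can delete them:

	$ kubectl --context newest-cni-20220601121425-16804 get po -A --field-selector=status.phase!=Running -o yaml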
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220601121425-16804
helpers_test.go:235: (dbg) docker inspect newest-cni-20220601121425-16804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2",
	        "Created": "2022-06-01T19:14:32.20488065Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-01T19:15:18.964437684Z",
	            "FinishedAt": "2022-06-01T19:15:17.028178606Z"
	        },
	        "Image": "sha256:5fc9565d342f677dd8987c0c7656d8d58147ab45c932c9076935b38a770e4cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/hostname",
	        "HostsPath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/hosts",
	        "LogPath": "/var/lib/docker/containers/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2/f73f4c4c0aad3e586d2aaec97536174aef7654b007b06c9dcbec04bb397c6ec2-json.log",
	        "Name": "/newest-cni-20220601121425-16804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220601121425-16804:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220601121425-16804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549-init/diff:/var/lib/docker/overlay2/a7bc9f1c6adb86c40dab88b0ff83084c2d8619839b3853fff9dae7ffe5649a91/diff:/var/lib/docker/overlay2/25435d341e7689153b5b1f3ab103d94dd925cee61a075d073a330405cdbd05af/diff:/var/lib/docker/overlay2/83568d387a1bd890da6e46938c943dc66012f4346dbabf5e96b09f85cfca44fe/diff:/var/lib/docker/overlay2/ca5cbd9c70a3d005491d350603a20d1c4c1f7061fc868e3e577b05308447df4c/diff:/var/lib/docker/overlay2/76b2059db7625d625832c0189df9b6173d33cf1898926f49b38b1279224e24ea/diff:/var/lib/docker/overlay2/0d832c4100492119f58403b5ac1757188e3bc934085c935a5454a0df21ee9ed8/diff:/var/lib/docker/overlay2/118799014bbf1198aa01a7a6aad0e62909c310499174c8e35d3ce8b47e6947db/diff:/var/lib/docker/overlay2/99986f402c4100a4976492abfd2b5a1e1385e5e0dea37fdd45097450bde3781c/diff:/var/lib/docker/overlay2/20dd2a8ea4ac5531ec621c9fc545feeee8634cd0453cc5480190c4473eeb8364/diff:/var/lib/docker/overlay2/4faacb
71201c10e9b3b0f88939836a34339abec8d7e37e07f5f3f4980f718b07/diff:/var/lib/docker/overlay2/bbacfddf7562d84f8b6772603002962ccc9a3be6b0579b3cbaf21b2c4d0da87b/diff:/var/lib/docker/overlay2/bb08465dc501b97f34e3924b27c8f79e882e2d8010eb66d5e808aef73afb5e47/diff:/var/lib/docker/overlay2/291600d3e1fa5fffdd396a001f887ddc63764780f65d30604188c7e913227490/diff:/var/lib/docker/overlay2/9760937ef08fe99065259f0c82065e20578b8b55825f3552cdcfe070a18b8ff2/diff:/var/lib/docker/overlay2/0d35e140b62573c1ba9a8245efa98c13bb7607ea01c4bce314347ea4ec8c780b/diff:/var/lib/docker/overlay2/3d5d03150458f950999f0e21aeeb8011be35fb849667954f6feaa4d2c6e4ae3b/diff:/var/lib/docker/overlay2/b8ad8f28a690f09a9f1966fa68270f46660fb2dff94c5b769c66828097203937/diff:/var/lib/docker/overlay2/d0ec3965c3286b3f8b62f715fc9bf0e66e9c903b76ca581d711d1a05dd24735e/diff:/var/lib/docker/overlay2/b352c170d6e87865732325f5b4c40c13a61b3feb92fda186495a93dae5aea451/diff:/var/lib/docker/overlay2/2ceb9a2177e737f38e42633a480dad1c55750b5d08c526c02d0910cf8c349cd9/diff:/var/lib/d
ocker/overlay2/42ccd3f14ab5e0afadfc4f8113b4c8747ae1aa6907e4cd57223a8eadad0bf7d4/diff:/var/lib/docker/overlay2/9d90656fbaf4ee62fbe1f548325bf292cb9c110ac9c306985533e9380c4984c8/diff:/var/lib/docker/overlay2/901797bbe051cfc2a518f0c57cbb23e44868500ea2bc08f98913aa840852939a/diff:/var/lib/docker/overlay2/408097fc2bc381937b0edfe9a74553e1f3dad5c55473818132398fa579f57637/diff:/var/lib/docker/overlay2/2e084237b1229f10f25c4e6fd085e2131bb02a8207fcaabfa930891c1969ff7e/diff:/var/lib/docker/overlay2/5e70a13cd950865cb0d02be73f5d62446f582e66e063b262d1d4577d04f431ab/diff:/var/lib/docker/overlay2/ad732ab86e0dc4abb95ba0b4a440978c8451848759d846ee02822da6d4f61eff/diff:/var/lib/docker/overlay2/bc48ed47ebaae8437c590d59d873d5629964585224cd48bd072282f6ebbd5a50/diff:/var/lib/docker/overlay2/6084baa67099034455de91a177901a8e6a16f9e8e394d9890a5da208ab41970e/diff:/var/lib/docker/overlay2/3d68472547212e455565793a499bbcd6d2f64b6affe8d4d743d650d38d7eda21/diff:/var/lib/docker/overlay2/653977a916a0b1af671ac663aca0be8a2716890007581ce1bbf20a66da0
65f94/diff:/var/lib/docker/overlay2/c73c15aa2c5fcdbb60e2a229a927785f253f88d31ee8ca4d620f9a9402b50ece/diff:/var/lib/docker/overlay2/4c9fc052f57ef0f85f3ad2e11914041b8a47ae401f27d86c5072a9790eea6877/diff:/var/lib/docker/overlay2/0540cdd7ab532bc7ea5ef34335870c09fd9ab847d589801690652526897df14e/diff:/var/lib/docker/overlay2/ecc9607dda04d45155868801f43b6cba23ea7bebdf3e4c2c51fbf409970bc3da/diff:/var/lib/docker/overlay2/6d73bc8e2d82cc3775292a4cac515ba668e8334939d147a844b3c262f2b374ce/diff:/var/lib/docker/overlay2/4c3b8e026d6a9ffe74f6dc59ceacec018e21429aa562698b799c56782762e06a/diff:/var/lib/docker/overlay2/87b63c6aed3096bebffe5284388ecda75a6ce62a36b60378a46c2bdbd426491b/diff:/var/lib/docker/overlay2/71ac1e93b44005f612c6de074670ee527af4944d7d5cec9aa8a523e0a584033a/diff:/var/lib/docker/overlay2/c2273919e078857807a444021b329cf7984077d7a9ff80c38d359fc4ad648571/diff:/var/lib/docker/overlay2/9bdcc31759c1ee0f19c47f01537ef10cd13906b320f8de211c8d80da88e835ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e861d038f5871936b9adc9b2a4d5ffe8682e8656ec55ef0cba0ba0a18c56549/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220601121425-16804",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220601121425-16804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220601121425-16804",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220601121425-16804",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220601121425-16804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4e0ef8e7050eb2793689ea224c06b795a845eac59966855fef0c96c6bcdcb4c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63286"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63287"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63284"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63285"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4e0ef8e7050e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220601121425-16804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f73f4c4c0aad",
	                        "newest-cni-20220601121425-16804"
	                    ],
	                    "NetworkID": "634df21d479086fd4886ba001592f0257be8d96086027910ce34ad386e6313ab",
	                    "EndpointID": "b24e76f7ca9dd0219a83c92e8de89b7ae5f7d0ac3cc7a5b90c69dfab6121e9a1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
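Triage note: the inspect output shows the Docker layer itself is healthy: State.Status is "running" with ExitCode 0, and the container holds 192.168.58.2 on the profile network. The pause failure is therefore inside the guest, not at the Docker layer. When only a few fields matter, a Go-template query keeps the post-mortem compact; a sketch using docker inspect's standard --format support (expected output derived from the JSON above):

	$ docker inspect -f '{{.State.Status}} {{.State.ExitCode}} {{(index .NetworkSettings.Networks "newest-cni-20220601121425-16804").IPAddress}}' newest-cni-20220601121425-16804
	running 0 192.168.58.2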
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220601121425-16804 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220601121425-16804 logs -n 25: (5.085803232s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                         | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                           |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220601115855-16804                | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | embed-certs-20220601115855-16804                           |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220601120640-16804      | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:06 PDT |
	|         | disable-driver-mounts-20220601120640-16804                 |                                                 |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:06 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:07 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |                |                     |                     |
	| logs    | old-k8s-version-20220601114806-16804                       | old-k8s-version-20220601114806-16804            | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:11 PDT | 01 Jun 22 12:11 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:07 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |         |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:13 PDT | 01 Jun 22 12:13 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804            | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| logs    | default-k8s-different-port-20220601120641-16804            | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220601120641-16804 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:14 PDT |
	|         | default-k8s-different-port-20220601120641-16804            |                                                 |         |                |                     |                     |
	| start   | -p newest-cni-20220601121425-16804 --memory=2200           | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:14 PDT | 01 Jun 22 12:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                 |         |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |                |                     |                     |
	| start   | -p newest-cni-20220601121425-16804 --memory=2200           | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |                |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.23.6              |                                                 |         |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:15 PDT | 01 Jun 22 12:15 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | newest-cni-20220601121425-16804                            |                                                 |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |                |                     |                     |
	| logs    | newest-cni-20220601121425-16804                            | newest-cni-20220601121425-16804                 | jenkins | v1.26.0-beta.1 | 01 Jun 22 12:16 PDT | 01 Jun 22 12:16 PDT |
	|         | logs -n 25                                                 |                                                 |         |                |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 12:15:17
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 12:15:17.696814   30017 out.go:296] Setting OutFile to fd 1 ...
	I0601 12:15:17.696973   30017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:15:17.696997   30017 out.go:309] Setting ErrFile to fd 2...
	I0601 12:15:17.697002   30017 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 12:15:17.697117   30017 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 12:15:17.697435   30017 out.go:303] Setting JSON to false
	I0601 12:15:17.712247   30017 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9887,"bootTime":1654101030,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 12:15:17.712361   30017 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 12:15:17.736545   30017 out.go:177] * [newest-cni-20220601121425-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 12:15:17.757487   30017 notify.go:193] Checking for updates...
	I0601 12:15:17.779138   30017 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 12:15:17.801292   30017 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:17.844142   30017 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 12:15:17.865224   30017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 12:15:17.886436   30017 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 12:15:17.908918   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:17.909562   30017 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 12:15:17.981700   30017 docker.go:137] docker version: linux-20.10.14
	I0601 12:15:17.981838   30017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:15:18.111344   30017 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:15:18.053192342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 12:15:18.133307   30017 out.go:177] * Using the docker driver based on existing profile
	I0601 12:15:18.154970   30017 start.go:284] selected driver: docker
	I0601 12:15:18.154995   30017 start.go:806] validating driver "docker" against &{Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:18.155139   30017 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 12:15:18.158589   30017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 12:15:18.288149   30017 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 19:15:18.231685823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
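
The two dumps above are the parsed output of docker system info --format "{{json .}}" (the Run line just before the second dump). The same Go template syntax can pull out a single field when the whole blob is not needed; a minimal sketch, using field names visible in the dump:

    docker system info --format '{{.ServerVersion}}'   # 20.10.14 in this run
    docker system info --format '{{.Driver}}'          # overlay2
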
	I0601 12:15:18.288406   30017 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0601 12:15:18.288423   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:18.288431   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:18.288445   30017 start_flags.go:306] config:
	{Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:18.310379   30017 out.go:177] * Starting control plane node newest-cni-20220601121425-16804 in cluster newest-cni-20220601121425-16804
	I0601 12:15:18.332359   30017 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 12:15:18.354367   30017 out.go:177] * Pulling base image ...
	I0601 12:15:18.397207   30017 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 12:15:18.397218   30017 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:15:18.397297   30017 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0601 12:15:18.397315   30017 cache.go:57] Caching tarball of preloaded images
	I0601 12:15:18.397498   30017 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 12:15:18.397520   30017 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0601 12:15:18.398565   30017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/config.json ...
	I0601 12:15:18.463117   30017 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon, skipping pull
	I0601 12:15:18.463133   30017 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a exists in daemon, skipping load
	I0601 12:15:18.463143   30017 cache.go:206] Successfully downloaded all kic artifacts
	I0601 12:15:18.463194   30017 start.go:352] acquiring machines lock for newest-cni-20220601121425-16804: {Name:mk2d27a35f2c21193ee482d3972539f56f892aa4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 12:15:18.463297   30017 start.go:356] acquired machines lock for "newest-cni-20220601121425-16804" in 67.531µs
	I0601 12:15:18.463320   30017 start.go:94] Skipping create...Using existing machine configuration
	I0601 12:15:18.463327   30017 fix.go:55] fixHost starting: 
	I0601 12:15:18.463535   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:18.531996   30017 fix.go:103] recreateIfNeeded on newest-cni-20220601121425-16804: state=Stopped err=<nil>
	W0601 12:15:18.532020   30017 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 12:15:18.575630   30017 out.go:177] * Restarting existing docker container for "newest-cni-20220601121425-16804" ...
	I0601 12:15:18.597027   30017 cli_runner.go:164] Run: docker start newest-cni-20220601121425-16804
	I0601 12:15:18.961320   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:19.039434   30017 kic.go:416] container "newest-cni-20220601121425-16804" state is running.
	I0601 12:15:19.040066   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:19.122459   30017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/config.json ...
	I0601 12:15:19.122847   30017 machine.go:88] provisioning docker machine ...
	I0601 12:15:19.122870   30017 ubuntu.go:169] provisioning hostname "newest-cni-20220601121425-16804"
	I0601 12:15:19.122959   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.203538   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.203721   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.203734   30017 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220601121425-16804 && echo "newest-cni-20220601121425-16804" | sudo tee /etc/hostname
	I0601 12:15:19.330895   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220601121425-16804
	
	I0601 12:15:19.330974   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.406319   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.406535   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.406550   30017 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220601121425-16804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220601121425-16804/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220601121425-16804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 12:15:19.525134   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:15:19.525155   30017 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 12:15:19.525179   30017 ubuntu.go:177] setting up certificates
	I0601 12:15:19.525188   30017 provision.go:83] configureAuth start
	I0601 12:15:19.525249   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:19.605963   30017 provision.go:138] copyHostCerts
	I0601 12:15:19.606064   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 12:15:19.606075   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 12:15:19.606194   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 12:15:19.606452   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 12:15:19.606461   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 12:15:19.606544   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 12:15:19.606746   30017 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 12:15:19.606754   30017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 12:15:19.606830   30017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 12:15:19.606964   30017 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220601121425-16804 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220601121425-16804]
	I0601 12:15:19.708984   30017 provision.go:172] copyRemoteCerts
	I0601 12:15:19.709053   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 12:15:19.709100   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.783853   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:19.868774   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 12:15:19.886007   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0601 12:15:19.903518   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 12:15:19.920926   30017 provision.go:86] duration metric: configureAuth took 395.730263ms
	I0601 12:15:19.920938   30017 ubuntu.go:193] setting minikube options for container-runtime
	I0601 12:15:19.921089   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:19.921150   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:19.993596   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:19.993740   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:19.993749   30017 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0601 12:15:20.111525   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0601 12:15:20.111538   30017 ubuntu.go:71] root file system type: overlay
	I0601 12:15:20.111694   30017 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0601 12:15:20.111786   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.184583   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:20.184728   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:20.184777   30017 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0601 12:15:20.308016   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0601 12:15:20.308149   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.384660   30017 main.go:134] libmachine: Using SSH client type: native
	I0601 12:15:20.384802   30017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d21c0] 0x13d5220 <nil>  [] 0s} 127.0.0.1 63286 <nil> <nil>}
	I0601 12:15:20.384815   30017 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0601 12:15:20.505728   30017 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 12:15:20.505742   30017 machine.go:91] provisioned docker machine in 1.382897342s
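
The guarded one-liner above only swaps in docker.service.new and restarts the daemon when diff reports a difference, so a provision run that changes nothing leaves the running engine untouched. The same update-if-changed idiom, unrolled with illustrative file names:

    new=/tmp/docker.service.new
    cur=/lib/systemd/system/docker.service
    # replace and restart only when the rendered unit actually differs
    if ! sudo diff -u "$cur" "$new" >/dev/null 2>&1; then
      sudo mv "$new" "$cur"
      sudo systemctl daemon-reload && sudo systemctl restart docker
    fi
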
	I0601 12:15:20.505757   30017 start.go:306] post-start starting for "newest-cni-20220601121425-16804" (driver="docker")
	I0601 12:15:20.505772   30017 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 12:15:20.505836   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 12:15:20.505881   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.578638   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:20.665477   30017 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 12:15:20.669149   30017 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0601 12:15:20.669167   30017 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0601 12:15:20.669174   30017 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0601 12:15:20.669178   30017 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0601 12:15:20.669187   30017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 12:15:20.669292   30017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 12:15:20.669427   30017 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem -> 168042.pem in /etc/ssl/certs
	I0601 12:15:20.669624   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 12:15:20.677091   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:15:20.694336   30017 start.go:309] post-start completed in 188.569022ms
	I0601 12:15:20.694408   30017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 12:15:20.694474   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.765912   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:20.848377   30017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0601 12:15:20.853146   30017 fix.go:57] fixHost completed within 2.389833759s
	I0601 12:15:20.853157   30017 start.go:81] releasing machines lock for "newest-cni-20220601121425-16804", held for 2.389868555s
	I0601 12:15:20.853232   30017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220601121425-16804
	I0601 12:15:20.927151   30017 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 12:15:20.927156   30017 ssh_runner.go:195] Run: systemctl --version
	I0601 12:15:20.927211   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:20.927230   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:21.005587   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:21.008584   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:21.222895   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0601 12:15:21.235416   30017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:15:21.245540   30017 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0601 12:15:21.245597   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 12:15:21.254909   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 12:15:21.269044   30017 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0601 12:15:21.339555   30017 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0601 12:15:21.409052   30017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0601 12:15:21.419341   30017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 12:15:21.493119   30017 ssh_runner.go:195] Run: sudo systemctl start docker
	I0601 12:15:21.503208   30017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:15:21.539095   30017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0601 12:15:21.620908   30017 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0601 12:15:21.621108   30017 cli_runner.go:164] Run: docker exec -t newest-cni-20220601121425-16804 dig +short host.docker.internal
	I0601 12:15:21.756400   30017 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0601 12:15:21.756560   30017 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0601 12:15:21.760987   30017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
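
The bash one-liner above makes the /etc/hosts update idempotent: strip any existing line for the name, append the fresh mapping, and copy the temp file back in a single sudo step. The same idiom spelled out, using this run's name/IP pair:

    ip=192.168.65.2; name=host.minikube.internal
    # drop any stale entry for $name, then append the current mapping
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
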
	I0601 12:15:21.771888   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:21.870120   30017 out.go:177]   - kubelet.network-plugin=cni
	I0601 12:15:21.891289   30017 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0601 12:15:21.913108   30017 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0601 12:15:21.913239   30017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:15:21.945830   30017 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 12:15:21.945847   30017 docker.go:541] Images already preloaded, skipping extraction
	I0601 12:15:21.945904   30017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0601 12:15:21.978568   30017 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0601 12:15:21.978600   30017 cache_images.go:84] Images are preloaded, skipping loading
	I0601 12:15:21.978678   30017 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0601 12:15:22.052046   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:22.052057   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:22.052077   30017 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0601 12:15:22.052108   30017 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220601121425-16804 NodeName:newest-cni-20220601121425-16804 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 12:15:22.052229   30017 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220601121425-16804"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0601 12:15:22.052316   30017 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220601121425-16804 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 12:15:22.052375   30017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 12:15:22.060110   30017 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 12:15:22.060163   30017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 12:15:22.067009   30017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0601 12:15:22.079768   30017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 12:15:22.092592   30017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2188 bytes)
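
The rendered kubeadm config shown earlier is copied onto the node as /var/tmp/minikube/kubeadm.yaml.new (the scp just above) and later diffed against the active kubeadm.yaml during restart. On a live profile it can be inspected in place; a sketch using this run's profile name:

    minikube ssh -p newest-cni-20220601121425-16804 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
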
	I0601 12:15:22.105644   30017 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0601 12:15:22.109585   30017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0601 12:15:22.119524   30017 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804 for IP: 192.168.58.2
	I0601 12:15:22.119629   30017 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 12:15:22.119701   30017 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 12:15:22.119783   30017 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/client.key
	I0601 12:15:22.119849   30017 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.key.cee25041
	I0601 12:15:22.119898   30017 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.key
	I0601 12:15:22.120087   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem (1338 bytes)
	W0601 12:15:22.120128   30017 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804_empty.pem, impossibly tiny 0 bytes
	I0601 12:15:22.120139   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1675 bytes)
	I0601 12:15:22.120167   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 12:15:22.120203   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 12:15:22.120233   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 12:15:22.120294   30017 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem (1708 bytes)
	I0601 12:15:22.120897   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 12:15:22.138917   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 12:15:22.156269   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 12:15:22.173707   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/newest-cni-20220601121425-16804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0601 12:15:22.191392   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 12:15:22.208705   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0601 12:15:22.225757   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 12:15:22.243397   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0601 12:15:22.260267   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/168042.pem --> /usr/share/ca-certificates/168042.pem (1708 bytes)
	I0601 12:15:22.278248   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 12:15:22.295471   30017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/16804.pem --> /usr/share/ca-certificates/16804.pem (1338 bytes)
	I0601 12:15:22.313361   30017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0601 12:15:22.325966   30017 ssh_runner.go:195] Run: openssl version
	I0601 12:15:22.331944   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 12:15:22.339732   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.343889   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.343932   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 12:15:22.349483   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 12:15:22.356904   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16804.pem && ln -fs /usr/share/ca-certificates/16804.pem /etc/ssl/certs/16804.pem"
	I0601 12:15:22.364729   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.368546   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 18:01 /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.368603   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16804.pem
	I0601 12:15:22.373820   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16804.pem /etc/ssl/certs/51391683.0"
	I0601 12:15:22.381072   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168042.pem && ln -fs /usr/share/ca-certificates/168042.pem /etc/ssl/certs/168042.pem"
	I0601 12:15:22.388832   30017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.393055   30017 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 18:01 /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.393215   30017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168042.pem
	I0601 12:15:22.399338   30017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168042.pem /etc/ssl/certs/3ec20f2e.0"
	I0601 12:15:22.406808   30017 kubeadm.go:395] StartCluster: {Name:newest-cni-20220601121425-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220601121425-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 12:15:22.406948   30017 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:15:22.436239   30017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 12:15:22.444134   30017 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 12:15:22.444147   30017 kubeadm.go:626] restartCluster start
	I0601 12:15:22.444191   30017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 12:15:22.451114   30017 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.451166   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:22.526099   30017 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220601121425-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:22.526306   30017 kubeconfig.go:127] "newest-cni-20220601121425-16804" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig - will repair!
	I0601 12:15:22.526699   30017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:22.528121   30017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 12:15:22.535877   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.535949   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.544928   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.745924   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.746094   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.756505   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:22.947083   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:22.947296   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:22.957656   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.147069   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.147288   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.157915   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.345086   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.345192   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.353982   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.545059   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.545242   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.555850   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.745457   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.745640   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.756690   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:23.945536   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:23.945668   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:23.955857   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.145431   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.145578   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.155775   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.345524   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.345625   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.356916   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.545430   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.545593   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.556086   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.746772   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.746951   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.757868   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:24.946734   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:24.946835   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:24.957121   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.146700   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.146894   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.158129   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.346742   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.346876   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.356926   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.546642   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.546735   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.556027   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.556041   30017 api_server.go:165] Checking apiserver status ...
	I0601 12:15:25.556098   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 12:15:25.564985   30017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.565004   30017 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
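
Note: the block above is a bounded poll: roughly every 200ms the same `sudo pgrep -xnf kube-apiserver.*minikube.*` is re-run over SSH until a PID appears or the wait expires, at which point the "needs reconfigure" branch is taken. A self-contained sketch of the same pattern, run locally with os/exec rather than over SSH (the command and cadence mirror the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until the apiserver process exists or the
    // deadline passes; pgrep exits non-zero when no process matches.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return nil
            }
            time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence in the log
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        if err := waitForAPIServer(3 * time.Second); err != nil {
            fmt.Println("needs reconfigure:", err) // the branch taken above
        }
    }
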
	I0601 12:15:25.565016   30017 kubeadm.go:1092] stopping kube-system containers ...
	I0601 12:15:25.565082   30017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0601 12:15:25.602459   30017 docker.go:442] Stopping containers: [68d9566a5229 9866045a2740 08fe7c389d05 483211ea09d2 ad8f707a9ba6 acf8b9eb91df 0aace92ddb91 bfd9ea02d125 e12d8d3ebb52 e1445bd1efd3 f50e317e9858 11c48b791323 0b270245a55f c410fd12249e 3a157a1c3457 6ae49c2db4a0 4787fe993ca1 c862ef500594]
	I0601 12:15:25.602539   30017 ssh_runner.go:195] Run: docker stop 68d9566a5229 9866045a2740 08fe7c389d05 483211ea09d2 ad8f707a9ba6 acf8b9eb91df 0aace92ddb91 bfd9ea02d125 e12d8d3ebb52 e1445bd1efd3 f50e317e9858 11c48b791323 0b270245a55f c410fd12249e 3a157a1c3457 6ae49c2db4a0 4787fe993ca1 c862ef500594
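
Note: dockershim names pod containers `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, which is why the `--filter=name=k8s_.*_(kube-system)_` pattern above selects exactly the kube-system containers. A hedged sketch of the list-then-stop step (direct docker CLI calls here; minikube issues these through its ssh_runner):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // List IDs of containers whose dockershim name places them in kube-system.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}}").Output()
        if err != nil {
            log.Fatalf("docker ps: %v", err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return
        }
        fmt.Println("Stopping containers:", ids)
        // One docker stop with all IDs, as in the log line above.
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            log.Fatalf("docker stop: %v", err)
        }
    }
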
	I0601 12:15:25.634269   30017 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 12:15:25.645054   30017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 12:15:25.653095   30017 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  1 19:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun  1 19:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  1 19:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  1 19:14 /etc/kubernetes/scheduler.conf
	
	I0601 12:15:25.653147   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 12:15:25.660894   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 12:15:25.668544   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 12:15:25.675734   30017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.675782   30017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 12:15:25.682775   30017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 12:15:25.689821   30017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 12:15:25.689865   30017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
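
Note: each surviving config file is grepped for the expected control-plane endpoint, and files that do not reference https://control-plane.minikube.internal:8443 (here controller-manager.conf and scheduler.conf) are deleted so the kubeadm phases below regenerate them. A minimal sketch of that check done in-process instead of via grep (path list and endpoint copied from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: nothing to clean up
            }
            if !strings.Contains(string(data), endpoint) {
                // Stale endpoint: remove so `kubeadm init phase kubeconfig` rewrites it.
                fmt.Printf("%s lacks %s - will remove\n", f, endpoint)
                os.Remove(f)
            }
        }
    }
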
	I0601 12:15:25.697022   30017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 12:15:25.704775   30017 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 12:15:25.704788   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:25.750948   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.446128   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.578782   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:26.628887   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
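
Note: rather than a full `kubeadm init`, the restart path replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, local etcd), all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing under the paths shown in the log, with error handling simplified:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        const config = "/var/tmp/minikube/kubeadm.yaml"
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            // Mirrors: sudo env PATH=... kubeadm init phase <phase> --config <config>
            cmd := fmt.Sprintf("sudo env PATH=\"/var/lib/minikube/binaries/v1.23.6:$PATH\" kubeadm init phase %s --config %s", phase, config)
            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
                log.Fatalf("phase %q failed: %v", phase, err)
            }
        }
    }
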
	I0601 12:15:26.681058   30017 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:15:26.681141   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:27.192960   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:27.692867   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:28.192589   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:28.203564   30017 api_server.go:71] duration metric: took 1.522538237s to wait for apiserver process to appear ...
	I0601 12:15:28.203585   30017 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:15:28.203598   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:31.033454   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 12:15:31.033469   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 12:15:31.533706   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:31.539933   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:15:31.539949   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:15:32.033574   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:32.040068   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 12:15:32.040083   30017 api_server.go:102] status: https://127.0.0.1:63285/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 12:15:32.533712   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:32.539591   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 200:
	ok
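
Note: the healthz probe above is unauthenticated, so its responses trace the apiserver coming up: first a 403 for system:anonymous (the server answers but denies /healthz to anonymous users), then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200 "ok". A sketch of such a poll, assuming the locally forwarded port from the log and skipping certificate verification as an anonymous liveness probe reasonably can:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Anonymous probe: no client cert, and the serving cert is for the
            // cluster, so verification is skipped (acceptable for a health poll only).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://127.0.0.1:63285/healthz" // forwarded apiserver port from the log
        for i := 0; i < 20; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // "ok": control plane is healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
        }
    }
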
	I0601 12:15:32.546412   30017 api_server.go:140] control plane version: v1.23.6
	I0601 12:15:32.546424   30017 api_server.go:130] duration metric: took 4.342864983s to wait for apiserver health ...
	I0601 12:15:32.546432   30017 cni.go:95] Creating CNI manager for ""
	I0601 12:15:32.546437   30017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 12:15:32.546449   30017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:15:32.556727   30017 system_pods.go:59] 8 kube-system pods found
	I0601 12:15:32.556746   30017 system_pods.go:61] "coredns-64897985d-j2plh" [3a8967e9-d37b-4f71-b57f-0b3a34dbdf08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 12:15:32.556751   30017 system_pods.go:61] "etcd-newest-cni-20220601121425-16804" [c181135a-268d-4847-8dd4-ec0e0f06226e] Running
	I0601 12:15:32.556758   30017 system_pods.go:61] "kube-apiserver-newest-cni-20220601121425-16804" [30ec5624-7260-4516-a9b7-2befbb6626aa] Running
	I0601 12:15:32.556762   30017 system_pods.go:61] "kube-controller-manager-newest-cni-20220601121425-16804" [ecf69675-926e-41de-a951-ddc2afa7194b] Running
	I0601 12:15:32.556767   30017 system_pods.go:61] "kube-proxy-w4cvx" [8cd61f44-5d14-434c-a84e-ffd68ac7bc21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 12:15:32.556773   30017 system_pods.go:61] "kube-scheduler-newest-cni-20220601121425-16804" [15357952-87e8-4636-8cdf-eb7113a0682b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 12:15:32.556780   30017 system_pods.go:61] "metrics-server-b955d9d8-x4szx" [caffaac7-3821-49eb-b2de-cc43c2d6c5c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:15:32.556784   30017 system_pods.go:61] "storage-provisioner" [ef765f27-a5f6-468b-9428-8a223e30a190] Running
	I0601 12:15:32.556788   30017 system_pods.go:74] duration metric: took 10.334849ms to wait for pod list to return data ...
	I0601 12:15:32.556794   30017 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:15:32.561801   30017 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:15:32.561816   30017 node_conditions.go:123] node cpu capacity is 6
	I0601 12:15:32.561826   30017 node_conditions.go:105] duration metric: took 5.028617ms to run NodePressure ...
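
Note: the NodePressure step reads capacity straight off the Node objects (61255492Ki of ephemeral storage and 6 CPUs here). An equivalent read with client-go, assuming a reachable kubeconfig (path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }
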
	I0601 12:15:32.561842   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 12:15:32.734164   30017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 12:15:32.745262   30017 ops.go:34] apiserver oom_adj: -16
	I0601 12:15:32.745275   30017 kubeadm.go:630] restartCluster took 10.301195785s
	I0601 12:15:32.745282   30017 kubeadm.go:397] StartCluster complete in 10.33855509s
	I0601 12:15:32.745298   30017 settings.go:142] acquiring lock: {Name:mk630944d7da2d6f5ad8bc7bd2a815aad6529f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:32.745396   30017 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 12:15:32.746012   30017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mk924f4ba24fa74a0cb052299e0cc4e825b209a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 12:15:32.749598   30017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220601121425-16804" rescaled to 1
	I0601 12:15:32.749637   30017 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0601 12:15:32.749651   30017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 12:15:32.749675   30017 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0601 12:15:32.810213   30017 out.go:177] * Verifying Kubernetes components...
	I0601 12:15:32.749931   30017 config.go:178] Loaded profile config "newest-cni-20220601121425-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 12:15:32.810312   30017 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810314   30017 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810324   30017 addons.go:65] Setting dashboard=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.810351   30017 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220601121425-16804"
	I0601 12:15:32.814156   30017 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 12:15:32.847251   30017 addons.go:153] Setting addon dashboard=true in "newest-cni-20220601121425-16804"
	I0601 12:15:32.847255   30017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220601121425-16804"
	W0601 12:15:32.847275   30017 addons.go:165] addon dashboard should already be in state true
	I0601 12:15:32.847284   30017 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220601121425-16804"
	W0601 12:15:32.847303   30017 addons.go:165] addon metrics-server should already be in state true
	I0601 12:15:32.847264   30017 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220601121425-16804"
	I0601 12:15:32.847322   30017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0601 12:15:32.847352   30017 addons.go:165] addon storage-provisioner should already be in state true
	I0601 12:15:32.847390   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847393   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847450   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:32.847745   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.848906   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.848930   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.849156   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:32.874342   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:32.980743   30017 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220601121425-16804"
	W0601 12:15:32.992368   30017 addons.go:165] addon default-storageclass should already be in state true
	I0601 12:15:32.992341   30017 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0601 12:15:32.992427   30017 host.go:66] Checking if "newest-cni-20220601121425-16804" exists ...
	I0601 12:15:33.011979   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0601 12:15:33.011996   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0601 12:15:33.012078   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.013569   30017 cli_runner.go:164] Run: docker container inspect newest-cni-20220601121425-16804 --format={{.State.Status}}
	I0601 12:15:33.037211   30017 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0601 12:15:33.068082   30017 api_server.go:51] waiting for apiserver process to appear ...
	I0601 12:15:33.111016   30017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 12:15:33.111126   30017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 12:15:33.148302   30017 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0601 12:15:33.171397   30017 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:15:33.192123   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 12:15:33.184596   30017 api_server.go:71] duration metric: took 434.942794ms to wait for apiserver process to appear ...
	I0601 12:15:33.192147   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0601 12:15:33.192164   30017 api_server.go:87] waiting for apiserver healthz status ...
	I0601 12:15:33.192166   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0601 12:15:33.192181   30017 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63285/healthz ...
	I0601 12:15:33.192244   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.192273   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.206856   30017 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 12:15:33.206882   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 12:15:33.206858   30017 api_server.go:266] https://127.0.0.1:63285/healthz returned 200:
	ok
	I0601 12:15:33.206999   30017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220601121425-16804
	I0601 12:15:33.209882   30017 api_server.go:140] control plane version: v1.23.6
	I0601 12:15:33.209903   30017 api_server.go:130] duration metric: took 17.729398ms to wait for apiserver health ...
	I0601 12:15:33.209909   30017 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 12:15:33.212306   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.220067   30017 system_pods.go:59] 8 kube-system pods found
	I0601 12:15:33.220104   30017 system_pods.go:61] "coredns-64897985d-j2plh" [3a8967e9-d37b-4f71-b57f-0b3a34dbdf08] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 12:15:33.220124   30017 system_pods.go:61] "etcd-newest-cni-20220601121425-16804" [c181135a-268d-4847-8dd4-ec0e0f06226e] Running
	I0601 12:15:33.220134   30017 system_pods.go:61] "kube-apiserver-newest-cni-20220601121425-16804" [30ec5624-7260-4516-a9b7-2befbb6626aa] Running
	I0601 12:15:33.220141   30017 system_pods.go:61] "kube-controller-manager-newest-cni-20220601121425-16804" [ecf69675-926e-41de-a951-ddc2afa7194b] Running
	I0601 12:15:33.220151   30017 system_pods.go:61] "kube-proxy-w4cvx" [8cd61f44-5d14-434c-a84e-ffd68ac7bc21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 12:15:33.220167   30017 system_pods.go:61] "kube-scheduler-newest-cni-20220601121425-16804" [15357952-87e8-4636-8cdf-eb7113a0682b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 12:15:33.220181   30017 system_pods.go:61] "metrics-server-b955d9d8-x4szx" [caffaac7-3821-49eb-b2de-cc43c2d6c5c8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0601 12:15:33.220201   30017 system_pods.go:61] "storage-provisioner" [ef765f27-a5f6-468b-9428-8a223e30a190] Running
	I0601 12:15:33.220209   30017 system_pods.go:74] duration metric: took 10.294658ms to wait for pod list to return data ...
	I0601 12:15:33.220218   30017 default_sa.go:34] waiting for default service account to be created ...
	I0601 12:15:33.223744   30017 default_sa.go:45] found service account: "default"
	I0601 12:15:33.223760   30017 default_sa.go:55] duration metric: took 3.535466ms for default service account to be created ...
	I0601 12:15:33.223776   30017 kubeadm.go:572] duration metric: took 474.122479ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0601 12:15:33.223798   30017 node_conditions.go:102] verifying NodePressure condition ...
	I0601 12:15:33.228770   30017 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0601 12:15:33.228786   30017 node_conditions.go:123] node cpu capacity is 6
	I0601 12:15:33.228800   30017 node_conditions.go:105] duration metric: took 4.995287ms to run NodePressure ...
	I0601 12:15:33.228813   30017 start.go:213] waiting for startup goroutines ...
	I0601 12:15:33.301789   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.313361   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.319829   30017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63286 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/newest-cni-20220601121425-16804/id_rsa Username:docker}
	I0601 12:15:33.382558   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0601 12:15:33.382572   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0601 12:15:33.463411   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0601 12:15:33.463454   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0601 12:15:33.479163   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 12:15:33.479594   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0601 12:15:33.479617   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0601 12:15:33.484722   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 12:15:33.491345   30017 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:15:33.491376   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0601 12:15:33.575793   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0601 12:15:33.575859   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0601 12:15:33.593652   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0601 12:15:33.681482   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0601 12:15:33.681500   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0601 12:15:33.857868   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0601 12:15:33.857885   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0601 12:15:33.892748   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0601 12:15:33.892767   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0601 12:15:33.984332   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0601 12:15:33.984347   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0601 12:15:34.064156   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0601 12:15:34.064169   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0601 12:15:34.086338   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0601 12:15:34.086357   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0601 12:15:34.173318   30017 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:15:34.173333   30017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0601 12:15:34.196575   30017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0601 12:15:34.695837   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.216659486s)
	I0601 12:15:34.695873   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211144239s)
	I0601 12:15:34.757351   30017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.163679978s)
	I0601 12:15:34.757383   30017 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220601121425-16804"
	I0601 12:15:34.884487   30017 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0601 12:15:34.905569   30017 addons.go:417] enableAddons completed in 2.155912622s
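
Note: addon installation above is two steps per manifest set: each YAML is pushed over SSH ("scp memory --> ..." writes an in-memory asset to /etc/kubernetes/addons/), then a single kubectl apply is issued with every -f flag at once, using the bundled kubectl and the in-VM kubeconfig. A condensed sketch of the apply step for the dashboard set (file list copied from the log; this is an illustration of the invocation shape, not minikube's code):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
            "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
            "dashboard-dp.yaml", "dashboard-role.yaml",
            "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
            "dashboard-secret.yaml", "dashboard-svc.yaml",
        }
        // sudo accepts leading VAR=value assignments before the command.
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.23.6/kubectl", "apply"}
        for _, m := range manifests {
            args = append(args, "-f", "/etc/kubernetes/addons/"+m)
        }
        // One invocation, many -f flags, as in the Run: line above.
        if err := exec.Command("sudo", args...).Run(); err != nil {
            log.Fatal(err)
        }
    }
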
	I0601 12:15:34.935673   30017 start.go:504] kubectl: 1.24.0, cluster: 1.23.6 (minor skew: 1)
	I0601 12:15:34.958503   30017 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220601121425-16804" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-01 19:15:19 UTC, end at Wed 2022-06-01 19:16:21 UTC. --
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.760412273Z" level=info msg="API listen on [::]:2376"
	Jun 01 19:15:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:19.762927777Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 01 19:15:32 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:32.761109785Z" level=info msg="ignoring event" container=f5eb74069122586ecbb8de72491c40684c62e7e42c29260158bfda9e2d0e7b63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:34 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:34.187838089Z" level=info msg="ignoring event" container=9ea85b42dfba0152e44de0d130bad25fcc7e706635d8b0a039c7d0ddd452726f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:34 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:34.265848931Z" level=info msg="ignoring event" container=ef21ce7592f74cc735034911681b88d2badc41c5d6de62fc86a3bf9d67b857fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:35 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:35.504240231Z" level=info msg="ignoring event" container=25b1941367e50f0d3fe9a7a3c265b0ce6186d5dec187f15134bfd7ac87385063 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:35 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:35.511151753Z" level=info msg="ignoring event" container=c8006b995b7fe142c6ccb5186cda04cf5a7a1398be3c7cc07bfa514d22cecf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:36 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:36.515308761Z" level=info msg="ignoring event" container=fa56fc719d9b61059d0e92e514f32dad998f8ee008318a36de50d7e25bad4dc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:15:36 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:15:36.529726523Z" level=info msg="ignoring event" container=8d71e51fbce728249dc93bc78ee243e2e415bb5e885320b14cae1d6b955f7d23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:09 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:09.762985490Z" level=info msg="ignoring event" container=b3234ec468d6d6aba0b6482fd904c87428a735a22b9f36315007a3f9581a2889 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:13 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:13.177657282Z" level=info msg="ignoring event" container=3737543edaefcab7a88476bdfff80cb4edff1184b0801e3f7b3eb3bc00d3af76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:14.088105851Z" level=info msg="ignoring event" container=7bb13062789beae21d787d65c242b7902afe1b34d6dc5e248a5e401cebfd9565 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:14.452484661Z" level=info msg="ignoring event" container=57f714b6b87a8fa9ca152d87b1d798a8ee8835e5ef85914d22dd7ff1f1d180f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:14.471196215Z" level=info msg="ignoring event" container=a8f93cf89a7d7fa11342862f94c11e0c0ae19b8c80422b584f05d21a701ed8de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:14 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:14.886993504Z" level=info msg="ignoring event" container=26a97de280af9a74f67ca36222be81df55e4ba17963b29a45b439d8eaa2cf6a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:16 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:16.139729594Z" level=info msg="ignoring event" container=8d76d511d78101d7f952aa9c6d2f05dee2f13b6d3d325e76b0ce5e75247579e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:16 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:16.336337114Z" level=info msg="ignoring event" container=aedc6e4fd277a6ebc33f1a54bcdff0b19f58f77407ca4816ea31322897731595 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:16 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:16.338638211Z" level=info msg="ignoring event" container=a39620c02df8f2e2b2a1ef0ad27c66b371051de41974fbcfaeecffdbfb08f83d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:16 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:16.342089465Z" level=info msg="ignoring event" container=4ef4f0c40bbc934d06ad431b49610db0042edf1aef1f8a1445f61b7b8eabc167 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:16 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:16.724162161Z" level=info msg="ignoring event" container=08b6e92873649b42f2650075c0232ffb2c8e47617441b49046de0bf6620a5e04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:19.455239283Z" level=info msg="ignoring event" container=2efdf6250cf1f49f1ddc0d1e2d0743c2623c57b8254b0f476de8f66e0f916ead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:19.567219341Z" level=info msg="ignoring event" container=8ee264e933c2c510698ef145bd1d29be4292867034be8c720885242f241bdbd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:19.578555684Z" level=info msg="ignoring event" container=b014a7bb18c33cc329b356140155dfbc138dd5b0220e7398fe45c22f468773c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:19 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:19.597436941Z" level=info msg="ignoring event" container=7cb2b9369e6d41a3855d411d87707412144db05e9c847d54e6dc38cb365f5f0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 01 19:16:21 newest-cni-20220601121425-16804 dockerd[130]: time="2022-06-01T19:16:21.341182825Z" level=info msg="ignoring event" container=387a6856ac399b236ebe072d688df1ef369618d758dcc68e8491cc2cb2eddcdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	182178f313099       6e38f40d628db       9 seconds ago        Running             storage-provisioner       2                   bd3c8e7e8e4b8
	9254f80cdc825       4c03754524064       48 seconds ago       Running             kube-proxy                1                   6208716e43268
	b3234ec468d6d       6e38f40d628db       49 seconds ago       Exited              storage-provisioner       1                   bd3c8e7e8e4b8
	441fa795fe1c1       595f327f224a4       54 seconds ago       Running             kube-scheduler            1                   c3dc76afa24d2
	91d8cb5a153b0       25f8c7f3da61c       54 seconds ago       Running             etcd                      1                   36ae06fc610d0
	194b9a4c9c8ec       df7b72818ad2e       54 seconds ago       Running             kube-controller-manager   1                   a6b68b0e15635
	1fe6b955488a3       8fa62c12256df       54 seconds ago       Running             kube-apiserver            1                   81546674a3803
	0aace92ddb91e       4c03754524064       About a minute ago   Exited              kube-proxy                0                   e1445bd1efd3a
	f50e317e9858b       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   3a157a1c34579
	11c48b791323c       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   6ae49c2db4a05
	0b270245a55f2       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   4787fe993ca14
	c410fd12249eb       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   c862ef5005948
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220601121425-16804
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220601121425-16804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af273d6c1d2efba123f39c341ef4e1b2746b42f1
	                    minikube.k8s.io/name=newest-cni-20220601121425-16804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T12_14_48_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 19:14:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220601121425-16804
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 19:16:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:14:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:14:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:14:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 19:16:10 +0000   Wed, 01 Jun 2022 19:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220601121425-16804
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0d7477b601740b2a7c32c13851e505c
	  System UUID:                b6b66e9d-2c51-4b0e-b036-bbe63b69343a
	  Boot ID:                    60fb2c64-72ec-41ec-9cdf-c18d3fde7c60
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-j2plh                                     100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     80s
	  kube-system                 etcd-newest-cni-20220601121425-16804                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         93s
	  kube-system                 kube-apiserver-newest-cni-20220601121425-16804              250m (4%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-newest-cni-20220601121425-16804    200m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-w4cvx                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-newest-cni-20220601121425-16804              100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 metrics-server-b955d9d8-x4szx                               100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         77s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-sfkqb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-fpbtt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 47s                  kube-proxy  
	  Normal  Starting                 79s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  100s (x5 over 100s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x5 over 100s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x4 over 100s)  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 100s                 kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    93s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  Starting                 93s                  kubelet     Starting kubelet.
	  Normal  NodeReady                83s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     54s (x7 over 55s)    kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)    kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)    kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                  kubelet     Node newest-cni-20220601121425-16804 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [91d8cb5a153b] <==
	* {"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T19:15:27.925Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220601121425-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:15:29.609Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:15:29.611Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T19:15:29.612Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"warn","ts":"2022-06-01T19:16:13.761Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"157.163712ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238511584476931544 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb.16f4952aa8a00c2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb.16f4952aa8a00c2f\" value_size:659 lease:3238511584476931218 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T19:16:13.761Z","caller":"traceutil/trace.go:171","msg":"trace[1192227376] linearizableReadLoop","detail":"{readStateIndex:676; appliedIndex:675; }","duration":"147.544859ms","start":"2022-06-01T19:16:13.614Z","end":"2022-06-01T19:16:13.761Z","steps":["trace[1192227376] 'read index received'  (duration: 71.372231ms)","trace[1192227376] 'applied index is now lower than readState.Index'  (duration: 76.170968ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:16:13.762Z","caller":"traceutil/trace.go:171","msg":"trace[2075321186] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"158.324851ms","start":"2022-06-01T19:16:13.603Z","end":"2022-06-01T19:16:13.762Z","steps":["trace[2075321186] 'compare'  (duration: 156.960355ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T19:16:13.762Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"147.74423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:217"}
	{"level":"info","ts":"2022-06-01T19:16:13.762Z","caller":"traceutil/trace.go:171","msg":"trace[630231128] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:638; }","duration":"147.861269ms","start":"2022-06-01T19:16:13.614Z","end":"2022-06-01T19:16:13.762Z","steps":["trace[630231128] 'agreement among raft nodes before linearized reading'  (duration: 147.621933ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-01T19:16:15.471Z","caller":"traceutil/trace.go:171","msg":"trace[1168585189] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"167.931766ms","start":"2022-06-01T19:16:15.303Z","end":"2022-06-01T19:16:15.470Z","steps":["trace[1168585189] 'process raft request'  (duration: 167.597016ms)"],"step_count":1}
	
	* 
	* ==> etcd [f50e317e9858] <==
	* {"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:newest-cni-20220601121425-16804 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-01T19:14:43.444Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.445Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T19:14:43.446Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-06-01T19:14:43.446Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2022-06-01T19:14:46.426Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.586754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-06-01T19:14:46.426Z","caller":"traceutil/trace.go:171","msg":"trace[1028010844] range","detail":"{range_begin:/registry/clusterroles/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:106; }","duration":"110.704074ms","start":"2022-06-01T19:14:46.315Z","end":"2022-06-01T19:14:46.426Z","steps":["trace[1028010844] 'agreement among raft nodes before linearized reading'  (duration: 35.574888ms)","trace[1028010844] 'range keys from in-memory index tree'  (duration: 74.994545ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T19:15:05.091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-01T19:15:05.091Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220601121425-16804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/01 19:15:05 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/01 19:15:05 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-01T19:15:05.100Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-01T19:15:05.103Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:05.104Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-01T19:15:05.104Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220601121425-16804","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:16:22 up  1:19,  0 users,  load average: 3.95, 1.43, 0.97
	Linux newest-cni-20220601121425-16804 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1fe6b955488a] <==
	* I0601 19:15:31.158049       1 cache.go:39] Caches are synced for autoregister controller
	I0601 19:15:31.158063       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 19:15:31.158145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 19:15:31.158220       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 19:15:31.158240       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 19:15:31.161963       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 19:15:32.016808       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 19:15:32.016842       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 19:15:32.022381       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0601 19:15:32.183900       1 handler_proxy.go:104] no RequestInfo found in the context
	E0601 19:15:32.183973       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0601 19:15:32.183985       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0601 19:15:32.639161       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 19:15:32.666668       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 19:15:32.690153       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 19:15:32.701160       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 19:15:32.726938       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 19:15:34.191769       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 19:15:34.710189       1 controller.go:611] quota admission added evaluator for: namespaces
	I0601 19:15:34.824734       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.233.219]
	I0601 19:15:34.833994       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.100.155.216]
	I0601 19:16:10.744215       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0601 19:16:11.053733       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 19:16:11.104111       1 controller.go:611] quota admission added evaluator for: endpoints
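	Note: the 503 for "v1beta1.metrics.k8s.io" at 19:15:32 means the aggregated metrics API had no healthy backend; consistent with the kubelet section below, the metrics-server pod never gets a network sandbox, so the APIService stays unavailable. Two quick checks, assuming a kubeconfig for this profile and the stock metrics-server label (both assumptions):
	kubectl get apiservice v1beta1.metrics.k8s.io -o wide
	kubectl -n kube-system get pods -l k8s-app=metrics-server   # label assumed from the standard manifest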
	
	* 
	* ==> kube-apiserver [c410fd12249e] <==
	* W0601 19:15:14.319298       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.322487       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.362789       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.372646       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.376937       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.396074       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.419030       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.432070       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.444038       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.497287       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.508350       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.526305       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.682507       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.724280       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.728793       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.744850       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.748317       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.793796       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.873863       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:14.992121       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.007051       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.013868       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.077985       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.132516       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0601 19:15:15.178203       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
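	Note: this connection-refused loop spans the window in which etcd was restarting (the etcd [f50e317e9858] block above closes the server at 19:15:05 and the new instance serves again at 19:15:29), so the apiserver's gRPC client is simply retrying until the endpoint returns. A direct probe, reusing the assumed cert paths from the etcd note above:
	etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health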
	
	* 
	* ==> kube-controller-manager [0b270245a55f] <==
	* I0601 19:15:01.518897       1 shared_informer.go:247] Caches are synced for node 
	I0601 19:15:01.518937       1 range_allocator.go:173] Starting range CIDR allocator
	I0601 19:15:01.518941       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0601 19:15:01.518949       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0601 19:15:01.520916       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-j2plh"
	I0601 19:15:01.521459       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 19:15:01.524928       1 range_allocator.go:374] Set node newest-cni-20220601121425-16804 PodCIDR to [192.168.0.0/24]
	I0601 19:15:01.526165       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-6rmbc"
	I0601 19:15:01.534380       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 19:15:01.603449       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 19:15:01.685793       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 19:15:01.701684       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0601 19:15:01.710686       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:15:01.726190       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 19:15:01.730567       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:15:01.732197       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 19:15:01.735978       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-6rmbc"
	I0601 19:15:01.739244       1 shared_informer.go:247] Caches are synced for job 
	I0601 19:15:02.152476       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:15:02.200578       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:15:02.200611       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 19:15:04.348086       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0601 19:15:04.350818       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0601 19:15:04.355507       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0601 19:15:04.361133       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-x4szx"
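	Note: the FailedCreate at 19:15:04.350 followed by SuccessfulCreate at 19:15:04.361 is the usual race between a Deployment and its ServiceAccount being applied; the ReplicaSet controller retries and succeeds about 10ms later, so it is benign. To verify after the fact, assuming a working kubeconfig for this profile:
	kubectl -n kube-system get serviceaccount metrics-server
	kubectl -n kube-system describe replicaset metrics-server-b955d9d8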
	
	* 
	* ==> kube-controller-manager [194b9a4c9c8e] <==
	* I0601 19:16:10.674733       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0601 19:16:10.678081       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0601 19:16:10.681469       1 shared_informer.go:247] Caches are synced for cronjob 
	I0601 19:16:10.701017       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 19:16:10.701041       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0601 19:16:10.739121       1 shared_informer.go:247] Caches are synced for endpoint 
	I0601 19:16:10.741208       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 19:16:10.742535       1 shared_informer.go:247] Caches are synced for job 
	I0601 19:16:10.757909       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0601 19:16:10.808815       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:16:10.840290       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 19:16:10.850324       1 shared_informer.go:247] Caches are synced for stateful set 
	I0601 19:16:10.853026       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 19:16:10.855565       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 19:16:10.858822       1 shared_informer.go:247] Caches are synced for expand 
	I0601 19:16:10.862355       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 19:16:10.902665       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 19:16:10.952598       1 shared_informer.go:247] Caches are synced for attach detach 
	I0601 19:16:11.055999       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0601 19:16:11.058053       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0601 19:16:11.207239       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-sfkqb"
	I0601 19:16:11.208925       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-fpbtt"
	I0601 19:16:11.360947       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:16:11.369265       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 19:16:11.369294       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [0aace92ddb91] <==
	* I0601 19:15:02.205856       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:15:02.205944       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:15:02.205966       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:15:02.232561       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:15:02.232634       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:15:02.232641       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:15:02.232651       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:15:02.233083       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:15:02.233992       1 config.go:317] "Starting service config controller"
	I0601 19:15:02.234036       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:15:02.234069       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:15:02.234072       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:15:02.336314       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:15:02.336324       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9254f80cdc82] <==
	* I0601 19:15:34.091183       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0601 19:15:34.091251       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0601 19:15:34.091272       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 19:15:34.185368       1 server_others.go:206] "Using iptables Proxier"
	I0601 19:15:34.185418       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0601 19:15:34.185424       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0601 19:15:34.185927       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0601 19:15:34.186329       1 server.go:656] "Version info" version="v1.23.6"
	I0601 19:15:34.189304       1 config.go:317] "Starting service config controller"
	I0601 19:15:34.189318       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 19:15:34.189422       1 config.go:226] "Starting endpoint slice config controller"
	I0601 19:15:34.189428       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 19:15:34.290045       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 19:15:34.290061       1 shared_informer.go:247] Caches are synced for service config 
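	Note: "Unknown proxy mode, assuming iptables proxy" (proxyMode="") only means the mode field in the kube-proxy configuration was left empty, so kube-proxy fell back to its iptables default. To see what kubeadm rendered, assuming the default ConfigMap name:
	kubectl -n kube-system get configmap kube-proxy -o yaml   # the "mode:" field is inside the config.conf key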
	
	* 
	* ==> kube-scheduler [11c48b791323] <==
	* E0601 19:14:45.330945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0601 19:14:45.328569       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 19:14:45.330984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 19:14:45.328834       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 19:14:45.330992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 19:14:45.328846       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:45.331004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 19:14:46.239492       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 19:14:46.239529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 19:14:46.271497       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 19:14:46.271536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 19:14:46.280448       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:46.280464       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 19:14:46.322763       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 19:14:46.322800       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:14:46.525387       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 19:14:46.525424       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 19:14:46.538416       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:46.538454       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0601 19:14:46.545691       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 19:14:46.545727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0601 19:14:49.521996       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0601 19:15:05.101746       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 19:15:05.102734       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0601 19:15:05.102966       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
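	Note: the list/watch "forbidden" errors above are a bootstrap race: the scheduler starts its informers before the apiserver has finished creating the system:kube-scheduler RBAC objects, and they stop once caches sync at 19:14:49. To confirm the permissions after bootstrap, assuming the caller may impersonate:
	kubectl auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl get clusterrolebinding system:kube-scheduler -o yaml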
	
	* 
	* ==> kube-scheduler [441fa795fe1c] <==
	* W0601 19:15:27.908158       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0601 19:15:28.530690       1 serving.go:348] Generated self-signed cert in-memory
	W0601 19:15:31.060714       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 19:15:31.060759       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 19:15:31.060768       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 19:15:31.060774       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 19:15:31.071235       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 19:15:31.073227       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 19:15:31.073298       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 19:15:31.073308       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 19:15:31.073353       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0601 19:15:31.083235       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083286       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083332       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083355       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083361       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083368       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083405       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0601 19:15:31.083453       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0601 19:15:31.173962       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
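	Note: the "Unable to get configmap/extension-apiserver-authentication" warning is transient here (the controller syncs at 19:15:31.173). If it persisted, the fix the log itself suggests would look roughly like this for the scheduler user; the rolebinding name below is illustrative, not taken from this cluster:
	kubectl create rolebinding kube-scheduler-auth-reader -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler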
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-01 19:15:19 UTC, end at Wed 2022-06-01 19:16:24 UTC. --
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.668871    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-d96b4ff4a125585d2e4c26bf -m comment --comment name: \"crio\" id: \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d96b4ff4a125585d2e4c26bf':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.668896    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\" network for pod \"dashboard-metrics-scraper-56974995fc-sfkqb\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-d96b4ff4a125585d2e4c26bf -m comment --comment name: \"crio\" id: \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d96b4ff4a125585d2e4c26bf':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.668949    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard(1b46de0b-afdd-480c-b25b-a6ca05ecd307)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard(1b46de0b-afdd-480c-b25b-a6ca05ecd307)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-d96b4ff4a125585d2e4c26bf -m comment --comment name: \\\"crio\\\" id: \\\"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d96b4ff4a125585d2e4c26bf':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb" podUID=1b46de0b-afdd-480c-b25b-a6ca05ecd307
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.683096    3517 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to set up pod \"coredns-64897985d-j2plh_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to teardown pod \"coredns-64897985d-j2plh_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-3923492e5e917b03375c7bcd -m comment --comment name: \"crio\" id: \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3923492e5e917b03375c7bcd':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.683157    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to set up pod \"coredns-64897985d-j2plh_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to teardown pod \"coredns-64897985d-j2plh_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-3923492e5e917b03375c7bcd -m comment --comment name: \"crio\" id: \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3923492e5e917b03375c7bcd':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-j2plh"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.683237    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to set up pod \"coredns-64897985d-j2plh_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" network for pod \"coredns-64897985d-j2plh\": networkPlugin cni failed to teardown pod \"coredns-64897985d-j2plh_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-3923492e5e917b03375c7bcd -m comment --comment name: \"crio\" id: \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3923492e5e917b03375c7bcd':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-j2plh"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.683300    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-j2plh_kube-system(3a8967e9-d37b-4f71-b57f-0b3a34dbdf08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-j2plh_kube-system(3a8967e9-d37b-4f71-b57f-0b3a34dbdf08)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\\\" network for pod \\\"coredns-64897985d-j2plh\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-j2plh_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\\\" network for pod \\\"coredns-64897985d-j2plh\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-j2plh_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.35 -j CNI-3923492e5e917b03375c7bcd -m comment --comment name: \\\"crio\\\" id: \\\"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3923492e5e917b03375c7bcd':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-j2plh" podUID=3a8967e9-d37b-4f71-b57f-0b3a34dbdf08
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.684437    3517 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.36 -j CNI-3820d299f687248f3741f27b -m comment --comment name: \"crio\" id: \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3820d299f687248f3741f27b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.684530    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.36 -j CNI-3820d299f687248f3741f27b -m comment --comment name: \"crio\" id: \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3820d299f687248f3741f27b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-x4szx"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.684568    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" network for pod \"metrics-server-b955d9d8-x4szx\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-x4szx_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.36 -j CNI-3820d299f687248f3741f27b -m comment --comment name: \"crio\" id: \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3820d299f687248f3741f27b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-x4szx"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.684633    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-x4szx_kube-system(caffaac7-3821-49eb-b2de-cc43c2d6c5c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-x4szx_kube-system(caffaac7-3821-49eb-b2de-cc43c2d6c5c8)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\\\" network for pod \\\"metrics-server-b955d9d8-x4szx\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-x4szx_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\\\" network for pod \\\"metrics-server-b955d9d8-x4szx\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-b955d9d8-x4szx_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.36 -j CNI-3820d299f687248f3741f27b -m comment --comment name: \\\"crio\\\" id: \\\"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3820d299f687248f3741f27b':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-x4szx" podUID=caffaac7-3821-49eb-b2de-cc43c2d6c5c8
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:23.686393    3517 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-b955d9d8-x4szx_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7abea46f7ae0155e73cc8f8bba8d1b3a02ff641299b42a03c5cc10801710caa6\""
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:23.695030    3517 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-sfkqb_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\""
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:23.702836    3517 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"aa85ab3d8972d2c4be9a0d743ee247a863fec23b557310e08c2b2db0a23e0593\""
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:23.706878    3517 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-j2plh_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\""
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:23.747230    3517 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"4ce21645bcce8f1d7b23ead7d45b4ad4e8ee23e756ae2090401242836c2fdc6d\""
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: I0601 19:16:23.950856    3517 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\""
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.950877    3517 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" network for pod \"kubernetes-dashboard-8469778f77-fpbtt\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" network for pod \"kubernetes-dashboard-8469778f77-fpbtt\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-af36bcbe26472ab57774a3d0 -m comment --comment name: \"crio\" id: \"7d4165a33d2897675154c5923e8334ea75df505f52
07c1e0bfcff47083835415\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-af36bcbe26472ab57774a3d0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.950955    3517 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" network for pod \"kubernetes-dashboard-8469778f77-fpbtt\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" network for pod \"kubernetes-dashboard-8469778f77-fpbtt\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-af36bcbe26472ab57774a3d0 -m comment --comment name: \"crio\" id: \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-af36bcbe26472ab57774a3d0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fpbtt"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.950984    3517 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" network for pod \"kubernetes-dashboard-8469778f77-fpbtt\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" network for pod \"kubernetes-dashboard-8469778f77-fpbtt\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-af36bcbe26472ab57774a3d0 -m comment --comment name: \"crio\" id: \"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-af36bcbe26472ab57774a3d0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fpbtt"
	Jun 01 19:16:23 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:23.951031    3517 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard(7f9573e4-0789-4d55-8543-67d3ad9d92c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard(7f9573e4-0789-4d55-8543-67d3ad9d92c2)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\\\" network for pod \\\"kubernetes-dashboard-8469778f77-fpbtt\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\\\" network for pod \\\"kubernetes-dashboard-8469778f77-fpbtt\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-8469778f77-fpbtt_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.37 -j CNI-af36bcbe26472ab57774a3d0 -m comment --comment name: \\\"crio\\\" id: \\\"7d4165a33d2897675154c5923e8334ea75df505f5207c1e0bfcff47083835415\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-af36bcbe26472ab57774a3d0':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-8469778f77-fpbtt" podUID=7f9573e4-0789-4d55-8543-67d3ad9d92c2
	Jun 01 19:16:24 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:24.349211    3517 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-j2plh" podSandboxID={Type:docker ID:69c05db001c33732f8eeb4e1351e8e916f25a5714fc92ac9777774c9c2697485} podNetnsPath="/proc/7982/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:24 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:24.352712    3517 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb" podSandboxID={Type:docker ID:95d76f51c6953557f32df93f848fcfea344156ed56f82566e0a0c00291a3b476} podNetnsPath="/proc/7977/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:24 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:24.383079    3517 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.38 -j CNI-ee4da8361ce7933737724ee0 -m comment --comment name: \"crio\" id: \"69c05db001c33732f8eeb4e1351e8e916f25a5714fc92ac9777774c9c2697485\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-ee4da8361ce7933737724ee0':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-j2plh" podSandboxID={Type:docker ID:69c05db001c33732f8eeb4e1351e8e916f25a5714fc92ac9777774c9c2697485} podNetnsPath="/proc/7982/ns/net" networkType="bridge" networkName="crio"
	Jun 01 19:16:24 newest-cni-20220601121425-16804 kubelet[3517]: E0601 19:16:24.387309    3517 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-7bc0492baae4bfaadc45db75 -m comment --comment name: \"crio\" id: \"95d76f51c6953557f32df93f848fcfea344156ed56f82566e0a0c00291a3b476\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-7bc0492baae4bfaadc45db75':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-sfkqb" podSandboxID={Type:docker ID:95d76f51c6953557f32df93f848fcfea344156ed56f82566e0a0c00291a3b476} podNetnsPath="/proc/7977/ns/net" networkType="bridge" networkName="crio"
	
	* 
	* ==> storage-provisioner [182178f31309] <==
	* I0601 19:16:13.178815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 19:16:13.190314       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 19:16:13.190467       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [b3234ec468d6] <==
	* I0601 19:15:32.998145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0601 19:16:09.651018       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220601121425-16804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220601121425-16804 describe pod coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220601121425-16804 describe pod coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt: exit status 1 (243.450608ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-j2plh" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-x4szx" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-sfkqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-fpbtt" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220601121425-16804 describe pod coredns-64897985d-j2plh metrics-server-b955d9d8-x4szx dashboard-metrics-scraper-56974995fc-sfkqb kubernetes-dashboard-8469778f77-fpbtt: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (50.12s)
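For hands-on triage of the Pause failure above, the kubelet log points at CNI teardown: iptables cannot find the per-sandbox CNI-* NAT chain, and address assignment on the cni0 bridge is rejected with permission denied. A minimal diagnostic sketch, not part of the test harness (profile, context, and chain names are taken from the log above; the minikube ssh invocations are illustrative only):

	out/minikube-darwin-amd64 ssh -p newest-cni-20220601121425-16804 -- sudo iptables -t nat -S POSTROUTING   # list POSTROUTING rules to see which CNI-* chains still exist
	out/minikube-darwin-amd64 ssh -p newest-cni-20220601121425-16804 -- ip addr show cni0                     # inspect the bridge that rejected the address assignment
	kubectl --context newest-cni-20220601121425-16804 get pods -A --field-selector=status.phase!=Running      # repeat the harness's non-running-pod query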


Test pass (247/288)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 8.26
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.23.6/json-events 3.04
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.76
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.47
19 TestBinaryMirror 5.19
20 TestOffline 44.42
22 TestAddons/Setup 87.85
26 TestAddons/parallel/MetricsServer 5.77
27 TestAddons/parallel/HelmTiller 13.22
29 TestAddons/parallel/CSI 44.95
31 TestAddons/serial/GCPAuth 14.73
32 TestAddons/StoppedEnableDisable 12.96
33 TestCertOptions 28.73
34 TestCertExpiration 418.27
35 TestDockerFlags 29.39
36 TestForceSystemdFlag 33.59
37 TestForceSystemdEnv 30.25
39 TestHyperKitDriverInstallOrUpdate 6.28
42 TestErrorSpam/setup 25.55
43 TestErrorSpam/start 2.18
44 TestErrorSpam/status 1.39
45 TestErrorSpam/pause 1.94
46 TestErrorSpam/unpause 2.11
47 TestErrorSpam/stop 13.26
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 40.91
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 6.48
54 TestFunctional/serial/KubeContext 0.03
55 TestFunctional/serial/KubectlGetPods 1.49
58 TestFunctional/serial/CacheCmd/cache/add_remote 4.36
59 TestFunctional/serial/CacheCmd/cache/add_local 1.88
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
61 TestFunctional/serial/CacheCmd/cache/list 0.08
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.49
63 TestFunctional/serial/CacheCmd/cache/cache_reload 2.41
64 TestFunctional/serial/CacheCmd/cache/delete 0.15
65 TestFunctional/serial/MinikubeKubectlCmd 0.51
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.66
67 TestFunctional/serial/ExtraConfig 30.31
68 TestFunctional/serial/ComponentHealth 0.05
69 TestFunctional/serial/LogsCmd 3.23
70 TestFunctional/serial/LogsFileCmd 3.33
72 TestFunctional/parallel/ConfigCmd 0.47
73 TestFunctional/parallel/DashboardCmd 8.69
74 TestFunctional/parallel/DryRun 1.89
75 TestFunctional/parallel/InternationalLanguage 0.62
76 TestFunctional/parallel/StatusCmd 1.44
79 TestFunctional/parallel/ServiceCmd 13.39
81 TestFunctional/parallel/AddonsCmd 0.3
82 TestFunctional/parallel/PersistentVolumeClaim 26.3
84 TestFunctional/parallel/SSHCmd 1
85 TestFunctional/parallel/CpCmd 1.81
86 TestFunctional/parallel/MySQL 22.19
87 TestFunctional/parallel/FileSync 0.47
88 TestFunctional/parallel/CertSync 2.86
92 TestFunctional/parallel/NodeLabels 0.05
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
96 TestFunctional/parallel/Version/short 0.11
97 TestFunctional/parallel/Version/components 0.76
98 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
99 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
100 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
101 TestFunctional/parallel/ImageCommands/ImageListYaml 0.41
102 TestFunctional/parallel/ImageCommands/ImageBuild 3.87
103 TestFunctional/parallel/ImageCommands/Setup 1.92
104 TestFunctional/parallel/DockerEnv/bash 1.79
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.44
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.67
109 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.51
110 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.39
111 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.06
112 TestFunctional/parallel/ImageCommands/ImageRemove 0.83
113 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.91
114 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.73
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
116 TestFunctional/parallel/ProfileCmd/profile_list 0.55
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.64
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.17
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/MountCmd/any-port 9.27
129 TestFunctional/parallel/MountCmd/specific-port 3.08
130 TestFunctional/delete_addon-resizer_images 0.21
131 TestFunctional/delete_my-image_image 0.08
132 TestFunctional/delete_minikube_cached_images 0.08
142 TestJSONOutput/start/Command 39.25
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.71
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.7
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 12.43
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.76
167 TestKicCustomNetwork/create_custom_network 26.95
168 TestKicCustomNetwork/use_default_bridge_network 26.53
169 TestKicExistingNetwork 26.46
170 TestKicCustomSubnet 26.85
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 56.32
175 TestMountStart/serial/StartWithMountFirst 7.51
176 TestMountStart/serial/VerifyMountFirst 0.43
177 TestMountStart/serial/StartWithMountSecond 7.22
178 TestMountStart/serial/VerifyMountSecond 0.44
179 TestMountStart/serial/DeleteFirst 2.45
180 TestMountStart/serial/VerifyMountPostDelete 0.43
181 TestMountStart/serial/Stop 1.63
182 TestMountStart/serial/RestartStopped 4.93
183 TestMountStart/serial/VerifyMountPostStop 0.43
186 TestMultiNode/serial/FreshStart2Nodes 70.58
187 TestMultiNode/serial/DeployApp2Nodes 5.9
188 TestMultiNode/serial/PingHostFrom2Pods 0.83
189 TestMultiNode/serial/AddNode 26.23
190 TestMultiNode/serial/ProfileList 0.53
191 TestMultiNode/serial/CopyFile 17.03
192 TestMultiNode/serial/StopNode 14.22
193 TestMultiNode/serial/StartAfterStop 25.3
194 TestMultiNode/serial/RestartKeepsNodes 120.31
195 TestMultiNode/serial/DeleteNode 19.04
196 TestMultiNode/serial/StopMultiNode 25.32
197 TestMultiNode/serial/RestartMultiNode 60.42
198 TestMultiNode/serial/ValidateNameConflict 28.62
204 TestScheduledStopUnix 98.85
205 TestSkaffold 58.77
207 TestInsufficientStorage 13.42
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 6.26
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 8.53
225 TestStoppedBinaryUpgrade/Setup 0.49
228 TestPause/serial/Start 48.18
229 TestStoppedBinaryUpgrade/MinikubeLogs 3.71
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.63
239 TestNoKubernetes/serial/StartWithK8s 25.53
240 TestPause/serial/SecondStartNoReconfiguration 6.61
241 TestPause/serial/Pause 0.83
243 TestNoKubernetes/serial/StartWithStopK8s 17.08
244 TestNoKubernetes/serial/Start 6.42
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
246 TestNoKubernetes/serial/ProfileList 33.14
247 TestNoKubernetes/serial/Stop 1.65
248 TestNoKubernetes/serial/StartNoArgs 4.53
249 TestNetworkPlugins/group/auto/Start 286.62
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.44
251 TestNetworkPlugins/group/kindnet/Start 46.55
252 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
253 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
254 TestNetworkPlugins/group/kindnet/NetCatPod 12.66
255 TestNetworkPlugins/group/kindnet/DNS 0.11
256 TestNetworkPlugins/group/kindnet/Localhost 0.1
257 TestNetworkPlugins/group/kindnet/HairPin 0.1
258 TestNetworkPlugins/group/cilium/Start 78.76
259 TestNetworkPlugins/group/cilium/ControllerPod 5.02
260 TestNetworkPlugins/group/cilium/KubeletFlags 0.44
261 TestNetworkPlugins/group/cilium/NetCatPod 12.11
262 TestNetworkPlugins/group/cilium/DNS 0.12
263 TestNetworkPlugins/group/cilium/Localhost 0.11
264 TestNetworkPlugins/group/cilium/HairPin 0.1
265 TestNetworkPlugins/group/calico/Start 67.98
266 TestNetworkPlugins/group/calico/ControllerPod 5.02
267 TestNetworkPlugins/group/calico/KubeletFlags 0.52
268 TestNetworkPlugins/group/calico/NetCatPod 12.31
269 TestNetworkPlugins/group/calico/DNS 0.12
270 TestNetworkPlugins/group/calico/Localhost 0.11
271 TestNetworkPlugins/group/calico/HairPin 0.11
272 TestNetworkPlugins/group/false/Start 77.49
273 TestNetworkPlugins/group/auto/KubeletFlags 0.46
274 TestNetworkPlugins/group/auto/NetCatPod 11.71
275 TestNetworkPlugins/group/auto/DNS 0.14
276 TestNetworkPlugins/group/auto/Localhost 0.13
277 TestNetworkPlugins/group/auto/HairPin 5.11
278 TestNetworkPlugins/group/bridge/Start 40.89
279 TestNetworkPlugins/group/false/KubeletFlags 0.46
280 TestNetworkPlugins/group/false/NetCatPod 13.05
281 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
282 TestNetworkPlugins/group/bridge/NetCatPod 11.7
283 TestNetworkPlugins/group/false/DNS 0.12
284 TestNetworkPlugins/group/false/Localhost 0.12
285 TestNetworkPlugins/group/false/HairPin 5.12
286 TestNetworkPlugins/group/bridge/DNS 0.13
287 TestNetworkPlugins/group/bridge/Localhost 0.12
288 TestNetworkPlugins/group/bridge/HairPin 0.13
289 TestNetworkPlugins/group/enable-default-cni/Start 248.82
290 TestNetworkPlugins/group/kubenet/Start 76.21
291 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
292 TestNetworkPlugins/group/kubenet/NetCatPod 10.67
293 TestNetworkPlugins/group/kubenet/DNS 0.12
294 TestNetworkPlugins/group/kubenet/Localhost 0.11
295 TestNetworkPlugins/group/kubenet/HairPin 0.11
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.63
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
304 TestStartStop/group/no-preload/serial/FirstStart 50.25
305 TestStartStop/group/no-preload/serial/DeployApp 10.74
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.73
307 TestStartStop/group/no-preload/serial/Stop 12.65
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
309 TestStartStop/group/no-preload/serial/SecondStart 336.79
312 TestStartStop/group/old-k8s-version/serial/Stop 1.66
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.66
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.47
320 TestStartStop/group/embed-certs/serial/FirstStart 40.98
321 TestStartStop/group/embed-certs/serial/DeployApp 9.73
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.69
323 TestStartStop/group/embed-certs/serial/Stop 12.58
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
325 TestStartStop/group/embed-certs/serial/SecondStart 338.46
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.64
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
332 TestStartStop/group/default-k8s-different-port/serial/FirstStart 41.17
333 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.71
334 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.78
335 TestStartStop/group/default-k8s-different-port/serial/Stop 12.66
336 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.33
337 TestStartStop/group/default-k8s-different-port/serial/SecondStart 336.11
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 8.02
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.59
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.48
344 TestStartStop/group/newest-cni/serial/FirstStart 38.38
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
347 TestStartStop/group/newest-cni/serial/Stop 12.74
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
349 TestStartStop/group/newest-cni/serial/SecondStart 17.93
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48
TestDownloadOnly/v1.16.0/json-events (8.26s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601105717-16804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601105717-16804 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (8.256815019s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.26s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.31s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220601105717-16804
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220601105717-16804: exit status 85 (313.751845ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:57:18
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:57:18.040818   16816 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:57:18.041545   16816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:57:18.041874   16816 out.go:309] Setting ErrFile to fd 2...
	I0601 10:57:18.041889   16816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:57:18.042153   16816 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 10:57:18.042283   16816 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 10:57:18.042795   16816 out.go:303] Setting JSON to true
	I0601 10:57:18.059552   16816 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5208,"bootTime":1654101030,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 10:57:18.059690   16816 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:57:18.082329   16816 out.go:97] [download-only-20220601105717-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	W0601 10:57:18.082497   16816 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball: no such file or directory
	I0601 10:57:18.082548   16816 notify.go:193] Checking for updates...
	I0601 10:57:18.102367   16816 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:57:18.123377   16816 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:57:18.165386   16816 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 10:57:18.207308   16816 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:57:18.249303   16816 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	W0601 10:57:18.291342   16816 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:57:18.291572   16816 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 10:57:18.358285   16816 docker.go:113] docker version returned error: exit status 1
	I0601 10:57:18.379514   16816 out.go:97] Using the docker driver based on user configuration
	I0601 10:57:18.379552   16816 start.go:284] selected driver: docker
	I0601 10:57:18.379560   16816 start.go:806] validating driver "docker" against <nil>
	I0601 10:57:18.379667   16816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:57:18.504840   16816 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:57:18.526574   16816 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0601 10:57:18.547303   16816 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:18.589444   16816 out.go:169] 
	W0601 10:57:18.610355   16816 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0601 10:57:18.631176   16816 out.go:169] 
	I0601 10:57:18.673440   16816 out.go:169] 
	W0601 10:57:18.694244   16816 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0601 10:57:18.694343   16816 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0601 10:57:18.694378   16816 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:18.715335   16816 out.go:169] 
	I0601 10:57:18.736552   16816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:57:18.866369   16816 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0601 10:57:18.888046   16816 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0601 10:57:18.888119   16816 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:57:18.935928   16816 out.go:169] 
	W0601 10:57:18.956796   16816 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0601 10:57:18.956930   16816 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0601 10:57:18.956962   16816 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:18.977905   16816 out.go:169] 
	I0601 10:57:19.019789   16816 out.go:169] 
	W0601 10:57:19.041000   16816 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0601 10:57:19.061865   16816 out.go:169] 
	I0601 10:57:19.082727   16816 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0601 10:57:19.082877   16816 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 10:57:19.103917   16816 out.go:169] Using Docker Desktop driver with the root privilege
	I0601 10:57:19.124911   16816 cni.go:95] Creating CNI manager for ""
	I0601 10:57:19.124945   16816 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0601 10:57:19.124958   16816 start_flags.go:306] config:
	{Name:download-only-20220601105717-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601105717-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:57:19.145701   16816 out.go:97] Starting control plane node download-only-20220601105717-16804 in cluster download-only-20220601105717-16804
	I0601 10:57:19.145729   16816 cache.go:120] Beginning downloading kic base image for docker with docker
	I0601 10:57:19.166830   16816 out.go:97] Pulling base image ...
	I0601 10:57:19.166885   16816 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 10:57:19.167006   16816 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local docker daemon
	I0601 10:57:19.167145   16816 cache.go:107] acquiring lock: {Name:mk3f2d0f507e29cac613426c429959a8e7117fcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.167151   16816 cache.go:107] acquiring lock: {Name:mkf9821d9d2461b5beffc2169318b3f11330978f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.167266   16816 cache.go:107] acquiring lock: {Name:mk20625b3b42652fa2b97770f3cffe50031cfe8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.167294   16816 cache.go:107] acquiring lock: {Name:mk39d38dd00db617aa0fac51208019e97a283e0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.168739   16816 cache.go:107] acquiring lock: {Name:mkf77daf4635f46d512836ff3b7910780092e3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.168787   16816 cache.go:107] acquiring lock: {Name:mkec6aae3d95d13043c632ee1d74530c3258c7ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.168725   16816 cache.go:107] acquiring lock: {Name:mk38b9b3fb1173d7556dc7022fa0f4b8f2a783b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.168812   16816 cache.go:107] acquiring lock: {Name:mke18255e6c02acfc109a62b4302690937c98745 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:57:19.169821   16816 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0601 10:57:19.169861   16816 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0601 10:57:19.169872   16816 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0601 10:57:19.169940   16816 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0601 10:57:19.169997   16816 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0601 10:57:19.170012   16816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601105717-16804/config.json ...
	I0601 10:57:19.170015   16816 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0601 10:57:19.170061   16816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601105717-16804/config.json: {Name:mk90eb991e0f3260a505b54067854f71a24a583e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 10:57:19.170109   16816 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 10:57:19.170139   16816 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0601 10:57:19.170530   16816 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0601 10:57:19.170967   16816 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0601 10:57:19.170966   16816 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0601 10:57:19.170974   16816 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0601 10:57:19.179346   16816 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.180645   16816 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.180938   16816 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.181098   16816 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.181877   16816 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.181944   16816 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.181966   16816 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.182117   16816 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0601 10:57:19.239822   16816 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:57:19.240021   16816 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a in local cache directory
	I0601 10:57:19.240149   16816 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a to local cache
	I0601 10:57:19.661575   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0601 10:57:19.664622   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0601 10:57:19.671977   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0601 10:57:19.689760   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0601 10:57:19.692876   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0601 10:57:19.707865   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0601 10:57:19.740535   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0601 10:57:19.740552   16816 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 573.310094ms
	I0601 10:57:19.740562   16816 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0601 10:57:19.760029   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0601 10:57:19.839101   16816 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0601 10:57:20.208501   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0601 10:57:20.208527   16816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.041368659s
	I0601 10:57:20.208542   16816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0601 10:57:20.254168   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0601 10:57:20.254193   16816 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 1.086874313s
	I0601 10:57:20.254225   16816 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0601 10:57:20.271517   16816 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0601 10:57:20.582544   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0601 10:57:20.582561   16816 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 1.415392079s
	I0601 10:57:20.582570   16816 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0601 10:57:20.730347   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0601 10:57:20.730365   16816 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 1.563026471s
	I0601 10:57:20.730376   16816 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0601 10:57:20.808840   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0601 10:57:20.808859   16816 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 1.641523404s
	I0601 10:57:20.808868   16816 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0601 10:57:20.819424   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0601 10:57:20.819440   16816 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 1.652173058s
	I0601 10:57:20.819454   16816 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0601 10:57:21.029759   16816 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0601 10:57:21.029776   16816 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 1.862430019s
	I0601 10:57:21.029785   16816 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0601 10:57:21.029796   16816 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601105717-16804"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

TestDownloadOnly/v1.23.6/json-events (3.04s)
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601105717-16804 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220601105717-16804 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker : (3.039225703s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (3.04s)

TestDownloadOnly/v1.23.6/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.29s)
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220601105717-16804
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220601105717-16804: exit status 85 (293.735413ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:57:26
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.18.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:57:26.837718   16866 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:57:26.837897   16866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:57:26.837903   16866 out.go:309] Setting ErrFile to fd 2...
	I0601 10:57:26.837907   16866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:57:26.838039   16866 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 10:57:26.838153   16866 root.go:300] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 10:57:26.838318   16866 out.go:303] Setting JSON to true
	I0601 10:57:26.854935   16866 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5216,"bootTime":1654101030,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 10:57:26.855093   16866 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 10:57:26.877157   16866 out.go:97] [download-only-20220601105717-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	W0601 10:57:26.877222   16866 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball: no such file or directory
	I0601 10:57:26.877263   16866 notify.go:193] Checking for updates...
	I0601 10:57:26.898682   16866 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:57:26.919884   16866 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:57:26.940896   16866 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 10:57:26.961721   16866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:57:27.003780   16866 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	W0601 10:57:27.045768   16866 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:57:27.046110   16866 config.go:178] Loaded profile config "download-only-20220601105717-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0601 10:57:27.046153   16866 start.go:714] api.Load failed for download-only-20220601105717-16804: filestore "download-only-20220601105717-16804": Docker machine "download-only-20220601105717-16804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:57:27.046193   16866 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 10:57:27.046210   16866 start.go:714] api.Load failed for download-only-20220601105717-16804: filestore "download-only-20220601105717-16804": Docker machine "download-only-20220601105717-16804" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0601 10:57:27.107958   16866 docker.go:113] docker version returned error: exit status 1
	I0601 10:57:27.128818   16866 out.go:97] Using the docker driver based on existing profile
	I0601 10:57:27.128832   16866 start.go:284] selected driver: docker
	I0601 10:57:27.128837   16866 start.go:806] validating driver "docker" against &{Name:download-only-20220601105717-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601105717-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:57:27.129034   16866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:57:27.257132   16866 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 10:57:27.278891   16866 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0601 10:57:27.299665   16866 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:27.362583   16866 out.go:169] 
	W0601 10:57:27.383722   16866 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0601 10:57:27.404408   16866 out.go:169] 
	I0601 10:57:27.446632   16866 out.go:169] 
	W0601 10:57:27.467417   16866 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0601 10:57:27.467490   16866 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0601 10:57:27.467522   16866 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:27.488548   16866 out.go:169] 
	I0601 10:57:27.509711   16866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 10:57:27.633038   16866 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0601 10:57:27.654649   16866 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0601 10:57:27.675512   16866 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0601 10:57:27.696553   16866 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:27.738510   16866 out.go:169] 
	W0601 10:57:27.759629   16866 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0601 10:57:27.780327   16866 out.go:169] 
	I0601 10:57:27.822532   16866 out.go:169] 
	W0601 10:57:27.843369   16866 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0601 10:57:27.843523   16866 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0601 10:57:27.843568   16866 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:27.864532   16866 out.go:169] 
	I0601 10:57:27.908332   16866 out.go:169] 
	W0601 10:57:27.929687   16866 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0601 10:57:27.929836   16866 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0601 10:57:27.929939   16866 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0601 10:57:27.950554   16866 out.go:169] 
	I0601 10:57:27.992548   16866 out.go:169] 
	W0601 10:57:28.013588   16866 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0601 10:57:28.034306   16866 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601105717-16804"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.29s)

TestDownloadOnly/DeleteAll (0.76s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.76s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.47s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220601105717-16804
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.47s)

TestBinaryMirror (5.19s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220601105734-16804 --alsologtostderr --binary-mirror http://127.0.0.1:56291 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220601105734-16804 --alsologtostderr --binary-mirror http://127.0.0.1:56291 --driver=docker : (4.511857414s)
helpers_test.go:175: Cleaning up "binary-mirror-20220601105734-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220601105734-16804
--- PASS: TestBinaryMirror (5.19s)

TestOffline (44.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220601113004-16804 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220601113004-16804 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (41.285395914s)
helpers_test.go:175: Cleaning up "offline-docker-20220601113004-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220601113004-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220601113004-16804: (3.135864514s)
--- PASS: TestOffline (44.42s)

TestAddons/Setup (87.85s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220601105739-16804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220601105739-16804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m27.852793036s)
--- PASS: TestAddons/Setup (87.85s)

TestAddons/parallel/MetricsServer (5.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 2.921871ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-c6mk8" [6928cd27-5209-4157-b546-006f37914410] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00903577s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220601105739-16804 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

TestAddons/parallel/HelmTiller (13.22s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 14.135614ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-xwmzw" [0a9fa65b-13ba-491c-a769-95b72fa653cc] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011881436s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220601105739-16804 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220601105739-16804 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.699865305s)
addons_test.go:440: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.22s)

TestAddons/parallel/CSI (44.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 5.390446ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220601105739-16804 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601105739-16804 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220601105739-16804 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [b668dc3b-aba3-4440-861c-9ef4e1673a2f] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [b668dc3b-aba3-4440-861c-9ef4e1673a2f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [b668dc3b-aba3-4440-861c-9ef4e1673a2f] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.010300379s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220601105739-16804 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601105739-16804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601105739-16804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220601105739-16804 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220601105739-16804 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220601105739-16804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601105739-16804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220601105739-16804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [69a23e77-4cf7-4dca-beb0-414ae86fea51] Pending
helpers_test.go:342: "task-pv-pod-restore" [69a23e77-4cf7-4dca-beb0-414ae86fea51] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [69a23e77-4cf7-4dca-beb0-414ae86fea51] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.013786877s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220601105739-16804 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220601105739-16804 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220601105739-16804 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.8353134s)
addons_test.go:592: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.95s)

TestAddons/serial/GCPAuth (14.73s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220601105739-16804 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c9b87e87-a89f-4bd9-b347-7830e2e897c1] Pending
helpers_test.go:342: "busybox" [c9b87e87-a89f-4bd9-b347-7830e2e897c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c9b87e87-a89f-4bd9-b347-7830e2e897c1] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.007444138s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220601105739-16804 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220601105739-16804 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220601105739-16804 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220601105739-16804 addons disable gcp-auth --alsologtostderr -v=1: (5.862482573s)
--- PASS: TestAddons/serial/GCPAuth (14.73s)

TestAddons/StoppedEnableDisable (12.96s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220601105739-16804
addons_test.go:132: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220601105739-16804: (12.574999995s)
addons_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220601105739-16804
addons_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220601105739-16804
--- PASS: TestAddons/StoppedEnableDisable (12.96s)

TestCertOptions (28.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220601113126-16804 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220601113126-16804 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (25.05497003s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220601113126-16804 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220601113126-16804 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220601113126-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220601113126-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220601113126-16804: (2.731736849s)
--- PASS: TestCertOptions (28.73s)

TestCertExpiration (418.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220601113122-16804 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220601113122-16804 --memory=2048 --cert-expiration=3m --driver=docker : (3m48.977628985s)
E0601 11:35:19.230135   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:36:00.193605   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:37:11.001897   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:37:22.116080   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220601113122-16804 --memory=2048 --cert-expiration=8760h --driver=docker 
E0601 11:38:14.658646   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220601113122-16804 --memory=2048 --cert-expiration=8760h --driver=docker : (6.076268721s)
helpers_test.go:175: Cleaning up "cert-expiration-20220601113122-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220601113122-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220601113122-16804: (3.211719057s)
--- PASS: TestCertExpiration (418.27s)

TestDockerFlags (29.39s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220601113057-16804 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220601113057-16804 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (24.681292118s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220601113057-16804 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220601113057-16804 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
helpers_test.go:175: Cleaning up "docker-flags-20220601113057-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220601113057-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220601113057-16804: (3.456530542s)
--- PASS: TestDockerFlags (29.39s)

TestForceSystemdFlag (33.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220601113049-16804 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220601113049-16804 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (29.911061552s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220601113049-16804 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220601113049-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220601113049-16804

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220601113049-16804: (3.120938607s)
--- PASS: TestForceSystemdFlag (33.59s)

TestForceSystemdEnv (30.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220601113027-16804 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220601113027-16804 --memory=2048 --alsologtostderr -v=5 --driver=docker : (26.273582705s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220601113027-16804 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220601113027-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220601113027-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220601113027-16804: (3.398161229s)
--- PASS: TestForceSystemdEnv (30.25s)

TestHyperKitDriverInstallOrUpdate (6.28s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.28s)

TestErrorSpam/setup (25.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220601110042-16804 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220601110042-16804 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 --driver=docker : (25.547537166s)
--- PASS: TestErrorSpam/setup (25.55s)

TestErrorSpam/start (2.18s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 start --dry-run
--- PASS: TestErrorSpam/start (2.18s)

TestErrorSpam/status (1.39s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 status
--- PASS: TestErrorSpam/status (1.39s)

TestErrorSpam/pause (1.94s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 pause
--- PASS: TestErrorSpam/pause (1.94s)

TestErrorSpam/unpause (2.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 unpause
--- PASS: TestErrorSpam/unpause (2.11s)

TestErrorSpam/stop (13.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 stop: (12.58195757s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220601110042-16804 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220601110042-16804 stop
--- PASS: TestErrorSpam/stop (13.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/test/nested/copy/16804/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.91s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (40.906013259s)
--- PASS: TestFunctional/serial/StartWithProxy (40.91s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.48s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --alsologtostderr -v=8: (6.474396243s)
functional_test.go:655: soft start took 6.476836951s for "functional-20220601110131-16804" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.48s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.49s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220601110131-16804 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220601110131-16804 get po -A: (1.486216493s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.49s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add k8s.gcr.io/pause:3.1: (1.302423591s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add k8s.gcr.io/pause:3.3: (1.5941725s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add k8s.gcr.io/pause:latest: (1.455635604s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.36s)

TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220601110131-16804 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local802661891/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add minikube-local-cache-test:functional-20220601110131-16804
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache add minikube-local-cache-test:functional-20220601110131-16804: (1.349987413s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache delete minikube-local-cache-test:functional-20220601110131-16804
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220601110131-16804
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.88s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.49s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (438.976475ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 cache reload: (1.031798045s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.41s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 kubectl -- --context functional-20220601110131-16804 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220601110131-16804 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.66s)

TestFunctional/serial/ExtraConfig (30.31s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.310934274s)
functional_test.go:753: restart took 30.311121959s for "functional-20220601110131-16804" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (30.31s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220601110131-16804 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 logs: (3.228897637s)
--- PASS: TestFunctional/serial/LogsCmd (3.23s)

TestFunctional/serial/LogsFileCmd (3.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2306131995/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2306131995/001/logs.txt: (3.33267392s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.33s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 config get cpus: exit status 14 (55.2279ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 config get cpus: exit status 14 (54.627603ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
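The block above shows "config get" exiting with status 14 once the key is unset. For reference, a minimal Go sketch of the same round-trip, assuming a minikube binary on PATH and a hypothetical profile named "functional"; the exit code 14 is taken from the log, not from any documented API:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube with the given arguments and returns the combined
// output plus the process exit code (0 when the command succeeds).
func run(args ...string) (string, int) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	run("-p", "functional", "config", "set", "cpus", "2")
	if _, code := run("-p", "functional", "config", "get", "cpus"); code != 0 {
		fmt.Println("expected cpus to be readable after set, got exit", code)
	}
	run("-p", "functional", "config", "unset", "cpus")
	if _, code := run("-p", "functional", "config", "get", "cpus"); code != 14 {
		fmt.Println("expected exit 14 for an unset key (as in the log), got", code)
	}
}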

TestFunctional/parallel/DashboardCmd (8.69s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220601110131-16804 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220601110131-16804 --alsologtostderr -v=1] ...
=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:506: unable to kill pid 18724: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.69s)

TestFunctional/parallel/DryRun (1.89s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (733.840006ms)
-- stdout --
	* [functional-20220601110131-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0601 11:04:12.435186   18622 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:04:12.435384   18622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:04:12.435390   18622 out.go:309] Setting ErrFile to fd 2...
	I0601 11:04:12.435394   18622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:04:12.435504   18622 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:04:12.435776   18622 out.go:303] Setting JSON to false
	I0601 11:04:12.452287   18622 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5622,"bootTime":1654101030,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:04:12.452411   18622 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:04:12.474333   18622 out.go:177] * [functional-20220601110131-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	I0601 11:04:12.495523   18622 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:04:12.538042   18622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:04:12.617996   18622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:04:12.660145   18622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:04:12.702173   18622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:04:12.723736   18622 config.go:178] Loaded profile config "functional-20220601110131-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:04:12.724235   18622 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:04:12.807903   18622 docker.go:137] docker version: linux-20.10.14
	I0601 11:04:12.808036   18622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:04:12.961726   18622 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:04:12.8853222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:04:12.983563   18622 out.go:177] * Using the docker driver based on existing profile
	I0601 11:04:13.004386   18622 start.go:284] selected driver: docker
	I0601 11:04:13.004403   18622 start.go:806] validating driver "docker" against &{Name:functional-20220601110131-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601110131-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:04:13.004553   18622 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:04:13.028398   18622 out.go:177] 
	W0601 11:04:13.050336   18622 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0601 11:04:13.071587   18622 out.go:177] 
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --dry-run --alsologtostderr -v=1 --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --dry-run --alsologtostderr -v=1 --driver=docker : (1.151941511s)
--- PASS: TestFunctional/parallel/DryRun (1.89s)

TestFunctional/parallel/InternationalLanguage (0.62s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220601110131-16804 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (613.065736ms)
-- stdout --
	* [functional-20220601110131-16804] minikube v1.26.0-beta.1 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0601 11:04:02.884432   18435 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:04:02.884577   18435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:04:02.884581   18435 out.go:309] Setting ErrFile to fd 2...
	I0601 11:04:02.884585   18435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:04:02.884701   18435 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:04:02.884960   18435 out.go:303] Setting JSON to false
	I0601 11:04:02.901271   18435 start.go:115] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5612,"bootTime":1654101030,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0601 11:04:02.901357   18435 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0601 11:04:02.922169   18435 out.go:177] * [functional-20220601110131-16804] minikube v1.26.0-beta.1 sur Darwin 12.4
	I0601 11:04:02.966628   18435 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:04:02.988214   18435 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:04:03.010467   18435 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0601 11:04:03.032125   18435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:04:03.053393   18435 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:04:03.076131   18435 config.go:178] Loaded profile config "functional-20220601110131-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:04:03.076765   18435 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:04:03.149053   18435 docker.go:137] docker version: linux-20.10.14
	I0601 11:04:03.149177   18435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0601 11:04:03.273885   18435 info.go:265] docker info: {ID:P6BO:T6BM:ZDBM:FULV:ZVLM:RETQ:MOAV:VPDW:3RE3:6OCX:FEVE:AUGK Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-01 18:04:03.20722092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0601 11:04:03.317373   18435 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0601 11:04:03.338539   18435 start.go:284] selected driver: docker
	I0601 11:04:03.338608   18435 start.go:806] validating driver "docker" against &{Name:functional-20220601110131-16804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601110131-16804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:04:03.338751   18435 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:04:03.363864   18435 out.go:177] 
	W0601 11:04:03.385749   18435 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0601 11:04:03.407547   18435 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.62s)
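The French output above is the point of this test: minikube localizes its banner and error messages based on the caller's locale environment. A minimal sketch of forcing that behavior, assuming a minikube binary on PATH and a hypothetical profile "functional"; the exact variable the test sets is an assumption (LC_ALL here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same dry-run invocation as the log; the memory request is deliberately
	// too small, so minikube exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY).
	cmd := exec.Command("minikube", "start", "-p", "functional",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumed locale switch
	out, _ := cmd.CombinedOutput()
	fmt.Print(string(out)) // banner and error arrive localized, as above
}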

TestFunctional/parallel/StatusCmd (1.44s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.44s)
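For reference, a minimal Go sketch of consuming "minikube status -o json" programmatically, as the last command above does. The field names are inferred from the Go-template format string in the log (which also carries the test suite's verbatim "kublet" typo), not from a documented schema; the profile name is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status mirrors the fields the format string in the log queries.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// minikube status uses non-zero exit codes to signal stopped components,
	// so the error is deliberately ignored here and only the JSON is parsed.
	out, _ := exec.Command("minikube", "-p", "functional", "status", "-o", "json").Output()
	var s status
	if err := json.Unmarshal(out, &s); err == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
	}
}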

TestFunctional/parallel/ServiceCmd (13.39s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220601110131-16804 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220601110131-16804 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-9lp6d" [6196cb8b-38d3-4c3c-bd2b-a04e93593c59] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-9lp6d" [6196cb8b-38d3-4c3c-bd2b-a04e93593c59] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.010720026s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 service list: (1.19306814s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 service --namespace=default --https --url hello-node: (2.026080567s)
functional_test.go:1475: found endpoint: https://127.0.0.1:59036
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 service hello-node --url --format={{.IP}}: (2.026603022s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 service hello-node --url: (2.025422523s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:59117
--- PASS: TestFunctional/parallel/ServiceCmd (13.39s)
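A minimal sketch of the flow this test exercises: create a deployment, expose it as a NodePort service, then ask minikube for a reachable URL. The kubectl context and profile names are assumptions; error handling is elided:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against an assumed context name,
// discarding output.
func kubectl(args ...string) {
	exec.Command("kubectl", append([]string{"--context", "functional"}, args...)...).Run()
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=k8s.gcr.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// On the docker driver minikube tunnels the NodePort to localhost,
	// which is why the log shows 127.0.0.1 endpoints.
	url, _ := exec.Command("minikube", "-p", "functional", "service", "hello-node", "--url").Output()
	fmt.Printf("endpoint: %s", url)
}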

TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (26.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [35e7d250-e89b-4cc6-9ca1-4498a9c5e56b] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009176765s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220601110131-16804 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220601110131-16804 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220601110131-16804 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601110131-16804 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a084aef9-6776-429e-8e3c-04bb82852fbb] Pending
helpers_test.go:342: "sp-pod" [a084aef9-6776-429e-8e3c-04bb82852fbb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a084aef9-6776-429e-8e3c-04bb82852fbb] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007929953s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220601110131-16804 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220601110131-16804 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601110131-16804 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [df0365c7-5a29-4423-8edb-e72c4917690e] Pending
helpers_test.go:342: "sp-pod" [df0365c7-5a29-4423-8edb-e72c4917690e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [df0365c7-5a29-4423-8edb-e72c4917690e] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011257959s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220601110131-16804 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.30s)
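The persistence check above follows a standard pattern: write a file through the claim, delete and recreate the pod, and confirm the file survived. A minimal sketch under the same assumptions (kubectl context "functional" is hypothetical; pod and manifest names mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

// kc shells out to kubectl with an assumed context name and returns
// whatever the command printed.
func kc(args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", "functional"}, args...)...).CombinedOutput()
}

func main() {
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new pod to become Ready before this exec.
	out, _ := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Print(string(out)) // "foo" shows the volume outlived the first pod
}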

TestFunctional/parallel/SSHCmd (1s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.00s)

TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh -n functional-20220601110131-16804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 cp functional-20220601110131-16804:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2504017188/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh -n functional-20220601110131-16804 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

TestFunctional/parallel/MySQL (22.19s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220601110131-16804 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-qk9dp" [89ad9546-f836-4edb-a3b5-7d26483f9657] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-qk9dp" [89ad9546-f836-4edb-a3b5-7d26483f9657] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.017232548s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;": exit status 1 (171.984504ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;": exit status 1 (120.263591ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;": exit status 1 (118.420929ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601110131-16804 exec mysql-b87c45988-qk9dp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.19s)
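The failed exec attempts above are expected: the pod reports Running before mysqld finishes initializing, so the first queries fail with an auth error and then ERROR 2002 until the server accepts connections. A minimal sketch of the retry pattern the test uses (pod name copied from the log; the context name and fixed retry budget are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-b87c45988-qk9dp" // pod name copied from the log
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(2 * time.Second) // mysqld is still initializing; try again
	}
	fmt.Println("mysql never became ready")
}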

TestFunctional/parallel/FileSync (0.47s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/16804/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /etc/test/nested/copy/16804/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.47s)

TestFunctional/parallel/CertSync (2.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/16804.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /etc/ssl/certs/16804.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/16804.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /usr/share/ca-certificates/16804.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/168042.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /etc/ssl/certs/168042.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/168042.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /usr/share/ca-certificates/168042.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.86s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220601110131-16804 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo systemctl is-active crio": exit status 1 (451.561526ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.76s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format short
E0601 11:04:18.078752   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220601110131-16804
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/kube-apiserver                   | v1.23.6                         | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                         | 595f327f224a4 | 53.5MB |
| k8s.gcr.io/kube-proxy                       | v1.23.6                         | 4c03754524064 | 112MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220601110131-16804 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-20220601110131-16804 | 6caf73582819b | 1.24MB |
| docker.io/library/nginx                     | alpine                          | b1c3acb288825 | 23.4MB |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                         | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                         | df7b72818ad2e | 125MB  |
| docker.io/kubernetesui/dashboard            | <none>                          | 7fff914c4a615 | 243MB  |
| docker.io/library/mysql                     | 5.7                             | 2a0961b7de03c | 462MB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-20220601110131-16804 | a82fe60ca7bf4 | 30B    |
| docker.io/library/nginx                     | latest                          | 0e901e68141fd | 142MB  |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format json
2022/06/01 11:04:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format json:
[{"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"243000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb
0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"6caf73582819b7f40904596c5d03f2912ce3fba54f07cf74f2fbac9268a03ac8","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220601110131-16804"],"size":"1240000"},{"id":"a82fe60ca7bf4674c050bbdebe4da5811f5aecf0ed4e31c5730f7dce543ae2a3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220601110131-16804"],"size":"30"},{"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e12
9ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220601110131-16804"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls --format yaml:
- id: a82fe60ca7bf4674c050bbdebe4da5811f5aecf0ed4e31c5730f7dce543ae2a3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220601110131-16804
size: "30"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh pgrep buildkitd: exit status 1 (493.723784ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image build -t localhost/my-image:functional-20220601110131-16804 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image build -t localhost/my-image:functional-20220601110131-16804 testdata/build: (3.028600377s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image build -t localhost/my-image:functional-20220601110131-16804 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 8ac60bb0eb1a
Removing intermediate container 8ac60bb0eb1a
---> af5a181ce83c
Step 3/3 : ADD content.txt /
---> 6caf73582819
Successfully built 6caf73582819
Successfully tagged localhost/my-image:functional-20220601110131-16804
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

TestFunctional/parallel/ImageCommands/Setup (1.92s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.848417159s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/DockerEnv/bash (1.79s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220601110131-16804 docker-env) && out/minikube-darwin-amd64 status -p functional-20220601110131-16804"
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220601110131-16804 docker-env) && out/minikube-darwin-amd64 status -p functional-20220601110131-16804": (1.066531242s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220601110131-16804 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.79s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.44s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.44s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804: (3.239973623s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.67s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804: (2.180108795s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.51s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804: (4.24259438s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image save gcr.io/google-containers/addon-resizer:functional-20220601110131-16804 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image save gcr.io/google-containers/addon-resizer:functional-20220601110131-16804 /Users/jenkins/workspace/addon-resizer-save.tar: (2.058026442s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.06s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image rm gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load /Users/jenkins/workspace/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.571370176s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.91s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220601110131-16804 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601110131-16804: (2.584632344s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.73s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "469.158781ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "80.946356ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "512.557983ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "119.423109ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220601110131-16804 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220601110131-16804 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [09192063-c233-41c7-8468-00d02c843443] Pending
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [09192063-c233-41c7-8468-00d02c843443] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [09192063-c233-41c7-8468-00d02c843443] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.007286465s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220601110131-16804 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220601110131-16804 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 18407: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (9.27s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220601110131-16804 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2306598741/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654106643456731000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2306598741/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654106643456731000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2306598741/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654106643456731000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2306598741/001/test-1654106643456731000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (467.242344ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  1 18:04 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  1 18:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  1 18:04 test-1654106643456731000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh cat /mount-9p/test-1654106643456731000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220601110131-16804 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [4d67991c-d2b1-47e2-85c2-2624dbefcabc] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [4d67991c-d2b1-47e2-85c2-2624dbefcabc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0601 11:04:07.834394   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:07.840503   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:07.850589   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:07.870762   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:07.911495   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:07.992443   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:08.152669   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:08.474812   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [4d67991c-d2b1-47e2-85c2-2624dbefcabc] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0601 11:04:09.115694   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:04:10.397943   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [4d67991c-d2b1-47e2-85c2-2624dbefcabc] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007641235s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220601110131-16804 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220601110131-16804 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2306598741/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.27s)

TestFunctional/parallel/MountCmd/specific-port (3.08s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220601110131-16804 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1144171941/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "findmnt -T /mount-9p | grep 9p"
E0601 11:04:12.958223   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (634.267233ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220601110131-16804 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1144171941/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh "sudo umount -f /mount-9p": exit status 1 (495.455229ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220601110131-16804 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220601110131-16804 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1144171941/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.08s)

TestFunctional/delete_addon-resizer_images (0.21s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601110131-16804
--- PASS: TestFunctional/delete_addon-resizer_images (0.21s)

TestFunctional/delete_my-image_image (0.08s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220601110131-16804
--- PASS: TestFunctional/delete_my-image_image (0.08s)

TestFunctional/delete_minikube_cached_images (0.08s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220601110131-16804
--- PASS: TestFunctional/delete_minikube_cached_images (0.08s)

TestJSONOutput/start/Command (39.25s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220601111141-16804 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220601111141-16804 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (39.245610447s)
--- PASS: TestJSONOutput/start/Command (39.25s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220601111141-16804 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220601111141-16804 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.43s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220601111141-16804 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220601111141-16804 --output=json --user=testUser: (12.430107623s)
--- PASS: TestJSONOutput/stop/Command (12.43s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220601111236-16804 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220601111236-16804 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (329.296049ms)
-- stdout --
	{"specversion":"1.0","id":"aeeffe81-084c-4a53-823f-4517e8489728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220601111236-16804] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"571b43e2-00a9-4f56-b9df-3e6470304b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"21e436fd-4517-47c4-bede-4c79f917b9f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"4d5b972b-2931-4364-b73c-d723a3f71878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"6325a2c8-e2f1-436d-ae77-33ff47746ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8167ec61-f5d8-47e0-8bd5-283732964f16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"4cc7aaf5-5964-43aa-bf25-ec8bc92de522","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220601111236-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220601111236-16804
--- PASS: TestErrorJSONOutput (0.76s)

TestKicCustomNetwork/create_custom_network (26.95s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220601111237-16804 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220601111237-16804 --network=: (24.123149814s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220601111237-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220601111237-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220601111237-16804: (2.758790012s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.95s)

TestKicCustomNetwork/use_default_bridge_network (26.53s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220601111304-16804 --network=bridge
E0601 11:13:14.614994   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220601111304-16804 --network=bridge: (23.904051527s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220601111304-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220601111304-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220601111304-16804: (2.561892149s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.53s)

TestKicExistingNetwork (26.46s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220601111331-16804 --network=existing-network
E0601 11:13:42.323215   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220601111331-16804 --network=existing-network: (23.330682513s)
helpers_test.go:175: Cleaning up "existing-network-20220601111331-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220601111331-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220601111331-16804: (2.721250182s)
--- PASS: TestKicExistingNetwork (26.46s)

TestKicCustomSubnet (26.85s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220601111357-16804 --subnet=192.168.60.0/24
E0601 11:14:07.909135   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220601111357-16804 --subnet=192.168.60.0/24: (24.036832777s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220601111357-16804 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220601111357-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220601111357-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220601111357-16804: (2.745919892s)
--- PASS: TestKicCustomSubnet (26.85s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (56.32s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220601111424-16804 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220601111424-16804 --driver=docker : (24.653878529s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220601111424-16804 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220601111424-16804 --driver=docker : (23.966515271s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220601111424-16804
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220601111424-16804
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220601111424-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220601111424-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220601111424-16804: (2.877668402s)
helpers_test.go:175: Cleaning up "first-20220601111424-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220601111424-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220601111424-16804: (2.750997694s)
--- PASS: TestMinikubeProfile (56.32s)
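
Note: the test drives profile switching with "minikube profile <name>" and then verifies the result via "profile list -ojson". A sketch of the same flow, assuming two throwaway profile names (first and second are hypothetical):

$ out/minikube-darwin-amd64 start -p first --driver=docker
$ out/minikube-darwin-amd64 start -p second --driver=docker
$ out/minikube-darwin-amd64 profile first         # make "first" the active profile
$ out/minikube-darwin-amd64 profile list -ojson   # the JSON output reflects the active profile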

TestMountStart/serial/StartWithMountFirst (7.51s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220601111520-16804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220601111520-16804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.507859537s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.51s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220601111520-16804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (7.22s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220601111520-16804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220601111520-16804 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.214764016s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.22s)

TestMountStart/serial/VerifyMountSecond (0.44s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220601111520-16804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

TestMountStart/serial/DeleteFirst (2.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220601111520-16804 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220601111520-16804 --alsologtostderr -v=5: (2.442944698s)
--- PASS: TestMountStart/serial/DeleteFirst (2.45s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220601111520-16804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.63s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220601111520-16804
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220601111520-16804: (1.63212742s)
--- PASS: TestMountStart/serial/Stop (1.63s)

TestMountStart/serial/RestartStopped (4.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220601111520-16804
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220601111520-16804: (3.930532183s)
--- PASS: TestMountStart/serial/RestartStopped (4.93s)

TestMountStart/serial/VerifyMountPostStop (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220601111520-16804 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

TestMultiNode/serial/FreshStart2Nodes (70.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601111548-16804 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601111548-16804 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m9.798334869s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.58s)

TestMultiNode/serial/DeployApp2Nodes (5.9s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.708317257s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- rollout status deployment/busybox: (2.81451626s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-2p2jw -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-pdq7v -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-2p2jw -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-pdq7v -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-2p2jw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-pdq7v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.90s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-2p2jw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-2p2jw -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-pdq7v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220601111548-16804 -- exec busybox-7978565885-pdq7v -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
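
Note: the pipeline above extracts the host's IP address as seen from inside a pod: nslookup resolves the special name host.minikube.internal, awk 'NR==5' keeps the fifth line of that output (the Address line in this busybox image), and cut -d' ' -f3 takes the third space-separated field. A sketch of the same probe, assuming a running pod named busybox (hypothetical):

$ kubectl exec busybox -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # prints the host IP (192.168.65.2 in this run)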

TestMultiNode/serial/AddNode (26.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220601111548-16804 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220601111548-16804 -v 3 --alsologtostderr: (25.102898909s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr: (1.122187902s)
--- PASS: TestMultiNode/serial/AddNode (26.23s)

TestMultiNode/serial/ProfileList (0.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.53s)

TestMultiNode/serial/CopyFile (17.03s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --output json --alsologtostderr: (1.12976663s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp testdata/cp-test.txt multinode-20220601111548-16804:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1865053546/001/cp-test_multinode-20220601111548-16804.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804:/home/docker/cp-test.txt multinode-20220601111548-16804-m02:/home/docker/cp-test_multinode-20220601111548-16804_multinode-20220601111548-16804-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m02 "sudo cat /home/docker/cp-test_multinode-20220601111548-16804_multinode-20220601111548-16804-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804:/home/docker/cp-test.txt multinode-20220601111548-16804-m03:/home/docker/cp-test_multinode-20220601111548-16804_multinode-20220601111548-16804-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m03 "sudo cat /home/docker/cp-test_multinode-20220601111548-16804_multinode-20220601111548-16804-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp testdata/cp-test.txt multinode-20220601111548-16804-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1865053546/001/cp-test_multinode-20220601111548-16804-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804-m02:/home/docker/cp-test.txt multinode-20220601111548-16804:/home/docker/cp-test_multinode-20220601111548-16804-m02_multinode-20220601111548-16804.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804 "sudo cat /home/docker/cp-test_multinode-20220601111548-16804-m02_multinode-20220601111548-16804.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804-m02:/home/docker/cp-test.txt multinode-20220601111548-16804-m03:/home/docker/cp-test_multinode-20220601111548-16804-m02_multinode-20220601111548-16804-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m03 "sudo cat /home/docker/cp-test_multinode-20220601111548-16804-m02_multinode-20220601111548-16804-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp testdata/cp-test.txt multinode-20220601111548-16804-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile1865053546/001/cp-test_multinode-20220601111548-16804-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804-m03:/home/docker/cp-test.txt multinode-20220601111548-16804:/home/docker/cp-test_multinode-20220601111548-16804-m03_multinode-20220601111548-16804.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804 "sudo cat /home/docker/cp-test_multinode-20220601111548-16804-m03_multinode-20220601111548-16804.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 cp multinode-20220601111548-16804-m03:/home/docker/cp-test.txt multinode-20220601111548-16804-m02:/home/docker/cp-test_multinode-20220601111548-16804-m03_multinode-20220601111548-16804-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 ssh -n multinode-20220601111548-16804-m02 "sudo cat /home/docker/cp-test_multinode-20220601111548-16804-m03_multinode-20220601111548-16804-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (17.03s)
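
Note: minikube cp accepts an optional <node>: prefix on either argument, which is the matrix exercised above: host-to-node, node-to-host, and node-to-node copies. The general shapes, with <profile> and <node> as placeholders:

$ minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt                # host -> node
$ minikube -p <profile> cp <node>:/home/docker/cp-test.txt ./cp-test.txt                       # node -> host
$ minikube -p <profile> cp <node1>:/home/docker/cp-test.txt <node2>:/home/docker/cp-test.txt   # node -> node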

TestMultiNode/serial/StopNode (14.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 node stop m03: (12.512413427s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status: exit status 7 (860.95078ms)

-- stdout --
	multinode-20220601111548-16804
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601111548-16804-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601111548-16804-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr: exit status 7 (846.769998ms)

-- stdout --
	multinode-20220601111548-16804
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601111548-16804-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601111548-16804-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0601 11:18:03.370865   21228 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:18:03.371020   21228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:18:03.371025   21228 out.go:309] Setting ErrFile to fd 2...
	I0601 11:18:03.371029   21228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:18:03.371123   21228 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:18:03.371294   21228 out.go:303] Setting JSON to false
	I0601 11:18:03.371313   21228 mustload.go:65] Loading cluster: multinode-20220601111548-16804
	I0601 11:18:03.371602   21228 config.go:178] Loaded profile config "multinode-20220601111548-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:18:03.371612   21228 status.go:253] checking status of multinode-20220601111548-16804 ...
	I0601 11:18:03.371959   21228 cli_runner.go:164] Run: docker container inspect multinode-20220601111548-16804 --format={{.State.Status}}
	I0601 11:18:03.444266   21228 status.go:328] multinode-20220601111548-16804 host status = "Running" (err=<nil>)
	I0601 11:18:03.444302   21228 host.go:66] Checking if "multinode-20220601111548-16804" exists ...
	I0601 11:18:03.444570   21228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601111548-16804
	I0601 11:18:03.516172   21228 host.go:66] Checking if "multinode-20220601111548-16804" exists ...
	I0601 11:18:03.516413   21228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:18:03.516489   21228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601111548-16804
	I0601 11:18:03.589653   21228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62509 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601111548-16804/id_rsa Username:docker}
	I0601 11:18:03.673942   21228 ssh_runner.go:195] Run: systemctl --version
	I0601 11:18:03.678385   21228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:18:03.687637   21228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220601111548-16804
	I0601 11:18:03.759788   21228 kubeconfig.go:92] found "multinode-20220601111548-16804" server: "https://127.0.0.1:62508"
	I0601 11:18:03.759814   21228 api_server.go:165] Checking apiserver status ...
	I0601 11:18:03.759849   21228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:18:03.769716   21228 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1601/cgroup
	W0601 11:18:03.777619   21228 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1601/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:18:03.777636   21228 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62508/healthz ...
	I0601 11:18:03.783211   21228 api_server.go:266] https://127.0.0.1:62508/healthz returned 200:
	ok
	I0601 11:18:03.783224   21228 status.go:419] multinode-20220601111548-16804 apiserver status = Running (err=<nil>)
	I0601 11:18:03.783233   21228 status.go:255] multinode-20220601111548-16804 status: &{Name:multinode-20220601111548-16804 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 11:18:03.783247   21228 status.go:253] checking status of multinode-20220601111548-16804-m02 ...
	I0601 11:18:03.783482   21228 cli_runner.go:164] Run: docker container inspect multinode-20220601111548-16804-m02 --format={{.State.Status}}
	I0601 11:18:03.855668   21228 status.go:328] multinode-20220601111548-16804-m02 host status = "Running" (err=<nil>)
	I0601 11:18:03.855691   21228 host.go:66] Checking if "multinode-20220601111548-16804-m02" exists ...
	I0601 11:18:03.855979   21228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220601111548-16804-m02
	I0601 11:18:03.928144   21228 host.go:66] Checking if "multinode-20220601111548-16804-m02" exists ...
	I0601 11:18:03.928461   21228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 11:18:03.928525   21228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220601111548-16804-m02
	I0601 11:18:04.001801   21228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62697 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601111548-16804-m02/id_rsa Username:docker}
	I0601 11:18:04.084853   21228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:18:04.094395   21228 status.go:255] multinode-20220601111548-16804-m02 status: &{Name:multinode-20220601111548-16804-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0601 11:18:04.094420   21228 status.go:253] checking status of multinode-20220601111548-16804-m03 ...
	I0601 11:18:04.094673   21228 cli_runner.go:164] Run: docker container inspect multinode-20220601111548-16804-m03 --format={{.State.Status}}
	I0601 11:18:04.166669   21228 status.go:328] multinode-20220601111548-16804-m03 host status = "Stopped" (err=<nil>)
	I0601 11:18:04.166690   21228 status.go:341] host is not running, skipping remaining checks
	I0601 11:18:04.166698   21228 status.go:255] multinode-20220601111548-16804-m03 status: &{Name:multinode-20220601111548-16804-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.22s)
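
Note: the Non-zero exit above is expected: minikube status returns exit code 7 when any node in the profile is stopped, so the test asserts on the exit code rather than treating it as a failure. A minimal check, with <profile> as a placeholder:

$ minikube -p <profile> node stop m03
$ minikube -p <profile> status; echo "exit: $?"   # prints "exit: 7" while m03 is stopped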

TestMultiNode/serial/StartAfterStop (25.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 node start m03 --alsologtostderr
E0601 11:18:14.627906   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 node start m03 --alsologtostderr: (24.055366069s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status: (1.125931975s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (25.30s)

TestMultiNode/serial/RestartKeepsNodes (120.31s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220601111548-16804
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220601111548-16804
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220601111548-16804: (37.238090705s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601111548-16804 --wait=true -v=8 --alsologtostderr
E0601 11:19:07.914116   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601111548-16804 --wait=true -v=8 --alsologtostderr: (1m22.960545591s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220601111548-16804
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.31s)

TestMultiNode/serial/DeleteNode (19.04s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 node delete m03
E0601 11:20:30.974923   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 node delete m03: (16.667148175s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.464667378s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (19.04s)
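
Note: the last command renders only the Ready condition of each remaining node. Unwrapped from the test's extra quoting, the go-template iterates every node and every condition on it, printing .status when the condition type is Ready:

$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'   # one "True" per healthy node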

TestMultiNode/serial/StopMultiNode (25.32s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 stop: (24.950742747s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status: exit status 7 (182.585859ms)

-- stdout --
	multinode-20220601111548-16804
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601111548-16804-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr: exit status 7 (180.258129ms)

-- stdout --
	multinode-20220601111548-16804
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601111548-16804-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0601 11:21:14.009421   21690 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:21:14.009612   21690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:21:14.009616   21690 out.go:309] Setting ErrFile to fd 2...
	I0601 11:21:14.009620   21690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:21:14.009713   21690 root.go:322] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:21:14.009877   21690 out.go:303] Setting JSON to false
	I0601 11:21:14.009891   21690 mustload.go:65] Loading cluster: multinode-20220601111548-16804
	I0601 11:21:14.010219   21690 config.go:178] Loaded profile config "multinode-20220601111548-16804": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0601 11:21:14.010228   21690 status.go:253] checking status of multinode-20220601111548-16804 ...
	I0601 11:21:14.010588   21690 cli_runner.go:164] Run: docker container inspect multinode-20220601111548-16804 --format={{.State.Status}}
	I0601 11:21:14.074693   21690 status.go:328] multinode-20220601111548-16804 host status = "Stopped" (err=<nil>)
	I0601 11:21:14.074747   21690 status.go:341] host is not running, skipping remaining checks
	I0601 11:21:14.074754   21690 status.go:255] multinode-20220601111548-16804 status: &{Name:multinode-20220601111548-16804 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 11:21:14.074796   21690 status.go:253] checking status of multinode-20220601111548-16804-m02 ...
	I0601 11:21:14.075050   21690 cli_runner.go:164] Run: docker container inspect multinode-20220601111548-16804-m02 --format={{.State.Status}}
	I0601 11:21:14.139318   21690 status.go:328] multinode-20220601111548-16804-m02 host status = "Stopped" (err=<nil>)
	I0601 11:21:14.139343   21690 status.go:341] host is not running, skipping remaining checks
	I0601 11:21:14.139350   21690 status.go:255] multinode-20220601111548-16804-m02 status: &{Name:multinode-20220601111548-16804-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.32s)

TestMultiNode/serial/RestartMultiNode (60.42s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601111548-16804 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601111548-16804 --wait=true -v=8 --alsologtostderr --driver=docker : (58.000631327s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220601111548-16804 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.504690487s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.42s)

TestMultiNode/serial/ValidateNameConflict (28.62s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220601111548-16804
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601111548-16804-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220601111548-16804-m02 --driver=docker : exit status 14 (368.231242ms)

-- stdout --
	* [multinode-20220601111548-16804-m02] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220601111548-16804-m02' is duplicated with machine name 'multinode-20220601111548-16804-m02' in profile 'multinode-20220601111548-16804'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220601111548-16804-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220601111548-16804-m03 --driver=docker : (24.653801392s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220601111548-16804
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220601111548-16804: exit status 80 (545.176392ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220601111548-16804
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220601111548-16804-m03 already exists in multinode-20220601111548-16804-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220601111548-16804-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220601111548-16804-m03: (2.999790098s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.62s)

TestScheduledStopUnix (98.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220601112713-16804 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220601112713-16804 --memory=2048 --driver=docker : (24.418063362s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601112713-16804 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220601112713-16804 -n scheduled-stop-20220601112713-16804
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601112713-16804 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601112713-16804 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220601112713-16804 -n scheduled-stop-20220601112713-16804
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220601112713-16804
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220601112713-16804 --schedule 15s
E0601 11:28:14.642524   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220601112713-16804
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220601112713-16804: exit status 7 (116.703179ms)

-- stdout --
	scheduled-stop-20220601112713-16804
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220601112713-16804 -n scheduled-stop-20220601112713-16804
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220601112713-16804 -n scheduled-stop-20220601112713-16804: exit status 7 (112.27774ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220601112713-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220601112713-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220601112713-16804: (2.412989755s)
--- PASS: TestScheduledStopUnix (98.85s)
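
Note: scheduled stops are armed asynchronously, can be re-armed or cancelled before they fire, and expose the pending timer through status --format={{.TimeToStop}}. The flow exercised above, with <profile> as a placeholder:

$ minikube stop -p <profile> --schedule 5m                # arm a stop five minutes out
$ minikube status -p <profile> --format={{.TimeToStop}}   # inspect the pending schedule
$ minikube stop -p <profile> --cancel-scheduled           # disarm it
$ minikube stop -p <profile> --schedule 15s               # re-arm; the host reaches Stopped shortly after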

TestSkaffold (58.77s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2485945553 version
skaffold_test.go:63: skaffold version: v1.38.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220601112852-16804 --memory=2600 --driver=docker 
E0601 11:29:07.932061   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220601112852-16804 --memory=2600 --driver=docker : (24.738924971s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2485945553 run --minikube-profile skaffold-20220601112852-16804 --kube-context skaffold-20220601112852-16804 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2485945553 run --minikube-profile skaffold-20220601112852-16804 --kube-context skaffold-20220601112852-16804 --status-check=true --port-forward=false --interactive=false: (19.571662581s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-d8bf46d76-dpj9r" [aa1376b5-19c5-43bd-9147-faaaf1abefb4] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013516031s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5d99cfb7fd-kr2qm" [cc81240c-5eb2-411f-b7eb-638ff7564ee9] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006145471s
helpers_test.go:175: Cleaning up "skaffold-20220601112852-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220601112852-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220601112852-16804: (3.034286519s)
--- PASS: TestSkaffold (58.77s)

TestInsufficientStorage (13.42s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220601112951-16804 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220601112951-16804 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.938453401s)

-- stdout --
	{"specversion":"1.0","id":"1a34dfb3-6cfe-4b54-9d23-e15de068dafb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220601112951-16804] minikube v1.26.0-beta.1 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e051afc1-d692-477a-af01-7717009b5fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"88fab1e5-085e-475e-b002-38e5bcf2e6d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"8f3e9570-1c68-4a1a-a752-d33aad5a26fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"15a0aaf9-3dd5-4e2d-876b-3519a0d83919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f68e94ee-3abe-4451-9890-efcaa3fdfa55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"4818021c-3d16-459f-93d3-1ca939a8b939","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6f5a2132-a7a5-4b23-aa13-85252dfc6d42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bfb31091-1a9c-4a6c-a719-054c9ab8ca08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"42df77fb-7e2e-45ad-bd49-1cfcdd85bca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"6da70352-a016-43f5-9b1f-39307f22fca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220601112951-16804 in cluster insufficient-storage-20220601112951-16804","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"22309102-2a27-4b9a-be43-76f4cced7857","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd8a41a8-e3db-4d04-881e-66223b43171c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffdabec4-e3f3-4df1-9422-39c74b5ec3e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220601112951-16804 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220601112951-16804 --output=json --layout=cluster: exit status 7 (502.54919ms)

-- stdout --
	{"Name":"insufficient-storage-20220601112951-16804","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601112951-16804","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0601 11:30:01.732531   22765 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601112951-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220601112951-16804 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220601112951-16804 --output=json --layout=cluster: exit status 7 (436.649268ms)

-- stdout --
	{"Name":"insufficient-storage-20220601112951-16804","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220601112951-16804","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0601 11:30:02.171546   22781 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220601112951-16804" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	E0601 11:30:02.180494   22781 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/insufficient-storage-20220601112951-16804/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220601112951-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220601112951-16804
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220601112951-16804: (2.54037178s)
--- PASS: TestInsufficientStorage (13.42s)
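
Note: judging by the log above, no disk is actually filled; the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables appear to make the preflight storage check believe /var is at capacity, so start exits with code 26 (RSRC_DOCKER_STORAGE) while machine-readable status stays available. A sketch of the same probe, with <profile> as a placeholder:

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    minikube start -p <profile> --output=json --driver=docker    # exits 26
$ minikube status -p <profile> --output=json --layout=cluster    # StatusName: "InsufficientStorage"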

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.26s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14079
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1697937346/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1697937346/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1697937346/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1697937346/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.26s)
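
Note: the chown root:wheel plus chmod u+s pair above is what lets the hyperkit driver run with root privileges without a password prompt; the warning shows the test proceeding even though non-interactive sudo could not apply them. A small Unix-only Go sketch of the pre-check a tool might do before re-running those commands (path and helper name are illustrative):

    // Unix-only sketch: check whether a driver binary is already root-owned
    // with the setuid bit set, i.e. whether the `sudo chown root:wheel` +
    // `sudo chmod u+s` step above can be skipped.
    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    func needsElevation(path string) (bool, error) {
    	fi, err := os.Stat(path)
    	if err != nil {
    		return true, err
    	}
    	sys, ok := fi.Sys().(*syscall.Stat_t)
    	rootOwned := ok && sys.Uid == 0
    	setuid := fi.Mode()&os.ModeSetuid != 0
    	return !(rootOwned && setuid), nil
    }

    func main() {
    	// Illustrative path, not the temp dir used by this run.
    	fmt.Println(needsElevation("/usr/local/bin/docker-machine-driver-hyperkit"))
    }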

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.53s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0-beta.1 on darwin
- MINIKUBE_LOCATION=14079
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current728503725/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current728503725/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current728503725/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current728503725/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.53s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestPause/serial/Start (48.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220601113830-16804 --memory=2048 --install-addons=false --wait=all --driver=docker 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220601113830-16804 --memory=2048 --install-addons=false --wait=all --driver=docker : (48.175023929s)
--- PASS: TestPause/serial/Start (48.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220601113821-16804
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220601113821-16804: (3.705699961s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (629.663022ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220601113904-16804] minikube v1.26.0-beta.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14079
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.63s)
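
Note: exit status 14 here is the usage-error path the test wants, since --no-kubernetes and --kubernetes-version are mutually exclusive. A generic sketch of that style of flag guard (flag names are from the log; the rest is illustrative, not minikube's implementation):

    // Generic mutually-exclusive flag guard in the spirit of the MK_USAGE error above.
    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
    	flag.Parse()

    	if *noK8s && *k8sVersion != "" {
    		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14) // the usage-error exit status seen above
    	}
    }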

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --driver=docker 
E0601 11:39:07.880881   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --driver=docker : (25.074051286s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220601113904-16804 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.53s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220601113830-16804 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220601113830-16804 --alsologtostderr -v=1 --driver=docker : (6.599154177s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.61s)

                                                
                                    
TestPause/serial/Pause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220601113830-16804 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --no-kubernetes --driver=docker 
E0601 11:39:38.198570   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --no-kubernetes --driver=docker : (13.919335466s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220601113904-16804 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220601113904-16804 status -o json: exit status 2 (450.944424ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220601113904-16804","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220601113904-16804
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220601113904-16804: (2.70791266s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.08s)
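
Note: `status -o json` exits non-zero (2 here) because the Kubelet and APIServer components are Stopped, so a caller has to tolerate the exit code and still parse stdout. A minimal sketch of decoding the flat profile shape above (field names are copied from the JSON; the struct itself is illustrative):

    // Illustrative decoder for the flat `minikube -p <profile> status -o json` shape above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type ProfileStatus struct {
    	Name       string `json:"Name"`
    	Host       string `json:"Host"`
    	Kubelet    string `json:"Kubelet"`
    	APIServer  string `json:"APIServer"`
    	Kubeconfig string `json:"Kubeconfig"`
    	Worker     bool   `json:"Worker"`
    }

    func main() {
    	raw := `{"Name":"demo","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
    	var st ProfileStatus
    	if err := json.Unmarshal([]byte(raw), &st); err != nil {
    		panic(err)
    	}
    	// The host is up while the Kubernetes components are intentionally stopped.
    	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped")
    }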

                                                
                                    
TestNoKubernetes/serial/Start (6.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --no-kubernetes --driver=docker : (6.422340256s)
--- PASS: TestNoKubernetes/serial/Start (6.42s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601113904-16804 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601113904-16804 "sudo systemctl is-active --quiet service kubelet": exit status 1 (418.397295ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
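
Note: `systemctl is-active` exits 0 for an active unit and non-zero otherwise (the "status 3" in the stderr above is the conventional "program is not running" code), so the failing ssh command is the expected outcome: it proves the kubelet is not running. A short sketch of extracting that exit code in Go (the helper is illustrative, not the helpers_test.go implementation):

    // Run a command and extract its exit code; 3 from `systemctl is-active`
    // means the unit is inactive, which is exactly what this test wants.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func exitCode(cmd *exec.Cmd) (int, error) {
    	err := cmd.Run()
    	if err == nil {
    		return 0, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		return ee.ExitCode(), nil
    	}
    	return -1, err // the command never ran
    }

    func main() {
    	code, err := exitCode(exec.Command("systemctl", "is-active", "--quiet", "kubelet"))
    	fmt.Println(code, err) // expect 3 while the kubelet unit is stopped
    }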

                                                
                                    
TestNoKubernetes/serial/ProfileList (33.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (16.407254904s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (16.72791754s)
--- PASS: TestNoKubernetes/serial/ProfileList (33.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220601113904-16804

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220601113904-16804: (1.649467407s)
--- PASS: TestNoKubernetes/serial/Stop (1.65s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (4.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220601113904-16804 --driver=docker : (4.529796656s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.53s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (286.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (4m46.618924654s)
--- PASS: TestNetworkPlugins/group/auto/Start (286.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601113904-16804 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220601113904-16804 "sudo systemctl is-active --quiet service kubelet": exit status 1 (442.382609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (46.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220601113005-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E0601 11:41:17.661471   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220601113005-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (46.5520458s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (46.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-hsk4h" [a5ef4d5d-0ffe-467a-8117-ec8d9ce9242b] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013922325s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220601113005-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220601113005-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220601113005-16804 replace --force -f testdata/netcat-deployment.yaml: (1.626023892s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-xqb6m" [63725d81-be6d-43f3-b836-5be070918e5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-xqb6m" [63725d81-be6d-43f3-b836-5be070918e5f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008916144s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.66s)
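
Note: every NetCatPod step in this group follows the same pattern: replace testdata/netcat-deployment.yaml, then poll until a pod labeled app=netcat reports Running (here within about 11s of a 15m budget). A rough kubectl-polling sketch of that wait loop; the helper name and poll interval are illustrative, not the helpers_test.go implementation:

    // Poll `kubectl get pods -l <selector>` until some pod reports Running.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForRunning(kubecontext, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", kubecontext,
    			"get", "pods", "-l", selector,
    			"-o", "jsonpath={.items[*].status.phase}").Output()
    		if err == nil && strings.Contains(string(out), "Running") {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("no %q pod Running within %v", selector, timeout)
    }

    func main() {
    	fmt.Println(waitForRunning("kindnet-20220601113005-16804", "app=netcat", 15*time.Minute))
    }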

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220601113005-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220601113005-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220601113005-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (78.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220601113006-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220601113006-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m18.760186736s)
--- PASS: TestNetworkPlugins/group/cilium/Start (78.76s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-7q9k2" [3b7b352d-40d9-43f7-b4f2-8366cad0b146] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016167785s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220601113006-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (12.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220601113006-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220601113006-16804 replace --force -f testdata/netcat-deployment.yaml: (2.077375138s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-q56t9" [008a53d9-1ee2-490d-9d3d-c0f4fb38adf5] Pending
helpers_test.go:342: "netcat-668db85669-q56t9" [008a53d9-1ee2-490d-9d3d-c0f4fb38adf5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 11:43:14.590703   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-q56t9" [008a53d9-1ee2-490d-9d3d-c0f4fb38adf5] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.006176793s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.11s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220601113006-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220601113006-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220601113006-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220601113006-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0601 11:44:07.879371   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220601113006-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m7.981826914s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.98s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-qcg8w" [171380fe-9b59-462a-9b25-2f82a4dad9e3] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018644575s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220601113006-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220601113006-16804 replace --force -f testdata/netcat-deployment.yaml
E0601 11:44:38.193512   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
net_test.go:138: (dbg) Done: kubectl --context calico-20220601113006-16804 replace --force -f testdata/netcat-deployment.yaml: (2.253738289s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-mmhfr" [1375d938-1024-4a96-9129-1a077b5b720a] Pending
helpers_test.go:342: "netcat-668db85669-mmhfr" [1375d938-1024-4a96-9129-1a077b5b720a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-mmhfr" [1375d938-1024-4a96-9129-1a077b5b720a] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.009681273s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220601113006-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220601113006-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220601113006-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/Start (77.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220601113005-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220601113005-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m17.491341892s)
--- PASS: TestNetworkPlugins/group/false/Start (77.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220601113004-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml: (1.679920685s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-n74gw" [2eb4a210-046f-446d-afee-491d00d0bb57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-n74gw" [2eb4a210-046f-446d-afee-491d00d0bb57] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00885842s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.71s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220601113004-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.107880337s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)
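
Note: in the HairPin step the netcat pod dials its own Service name ("nc ... -z netcat 8080"), so the probe only succeeds when the network plugin supports hairpin traffic; for the auto (driver-default) network, the non-zero nc exit above is itself the passing outcome. A sketch of how such an expectation flips per plugin; which plugins are expected to hairpin is read off the pass/fail pattern in this log, not quoted from net_test.go:

    // A hairpin probe is a pass either way, depending on whether the plugin
    // under test is expected to support hairpin traffic.
    package main

    import "fmt"

    func hairpinOK(expectHairpin bool, ncErr error) bool {
    	if expectHairpin {
    		return ncErr == nil // the pod must reach its own Service
    	}
    	return ncErr != nil // the connection is expected to fail
    }

    func main() {
    	// In this log kindnet, cilium, calico, bridge, kubenet and
    	// enable-default-cni connect cleanly, while auto and false
    	// pass by failing the probe.
    	fmt.Println(hairpinOK(false, fmt.Errorf("exit status 1"))) // true
    }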

                                                
                                    
TestNetworkPlugins/group/bridge/Start (40.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (40.885274062s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.89s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220601113005-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220601113005-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220601113005-16804 replace --force -f testdata/netcat-deployment.yaml: (1.993764012s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-c866h" [20086743-f325-429b-a9d8-a16227832820] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-c866h" [20086743-f325-429b-a9d8-a16227832820] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.006323096s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.05s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220601113004-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml: (1.662077721s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-wqfhh" [e671f9f2-c9b4-4da8-ad59-12da25607501] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 11:46:23.706717   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:23.711839   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:23.722482   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:23.742905   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:23.783067   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:23.863178   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:24.023423   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:24.343522   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:46:24.983668   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-668db85669-wqfhh" [e671f9f2-c9b4-4da8-ad59-12da25607501] Running
E0601 11:46:28.827277   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.012270753s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.70s)
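
Note: the recurring "E0601 ... cert_rotation.go:168] key failed with : open .../client.crt: no such file or directory" lines above come from the test binary's client-go certificate-rotation watcher still referencing profiles that earlier tests already deleted; judging by the paths, they are background noise rather than failures of the bridge test. When triaging a report like this one, such lines can simply be filtered out, for example:

    // Illustrative log filter: drop the background cert_rotation noise.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.Contains(line, "cert_rotation.go") &&
    			strings.Contains(line, "no such file or directory") {
    			continue // stale watcher on a deleted profile's client.crt
    		}
    		fmt.Println(line)
    	}
    }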

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220601113005-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220601113005-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220601113005-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0601 11:46:26.265900   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220601113005-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.120911904s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220601113004-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (248.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0601 11:46:33.950608   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (4m8.817961394s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (248.82s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (76.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E0601 11:46:44.192748   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:47:04.673592   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:47:45.635320   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220601113004-16804 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (1m16.21039376s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (76.21s)
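
Note: this group exercises the plugin matrix through different start flags: --cni=<name> for kindnet, cilium, calico, false and bridge; --enable-default-cni=true for the enable-default-cni variant; --network-plugin=kubenet for kubenet (a kubelet plugin rather than a CNI); and no plugin flag at all for auto. A compact reconstruction of that mapping from the start commands in this log; the helper and the trimmed base flags are illustrative (the real invocations also pass --alsologtostderr, --wait-timeout=5m, etc.):

    // Map the network-plugin matrix seen in this log onto `minikube start` flags.
    package main

    import "fmt"

    func startArgs(plugin string) []string {
    	base := []string{"start", "--memory=2048", "--wait=true", "--driver=docker"}
    	switch plugin {
    	case "kubenet":
    		return append(base, "--network-plugin=kubenet") // kubelet plugin, not a CNI
    	case "enable-default-cni":
    		return append(base, "--enable-default-cni=true")
    	case "auto":
    		return base // no plugin flag at all
    	default: // kindnet, cilium, calico, false, bridge
    		return append(base, "--cni="+plugin)
    	}
    }

    func main() {
    	fmt.Println(startArgs("kubenet"))
    }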

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220601113004-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml: (1.638902843s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-5j4tg" [33695fd0-a6e3-4b6c-8977-79ff0c3b623f] Pending
helpers_test.go:342: "netcat-668db85669-5j4tg" [33695fd0-a6e3-4b6c-8977-79ff0c3b623f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-5j4tg" [33695fd0-a6e3-4b6c-8977-79ff0c3b623f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.009867184s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.67s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220601113004-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220601113004-16804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220601113004-16804 replace --force -f testdata/netcat-deployment.yaml: (1.600518785s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-gdmhl" [6f72771c-823a-4c03-b52f-672561023c29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 11:50:47.651459   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-gdmhl" [6f72771c-823a-4c03-b52f-672561023c29] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005872971s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.63s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601113004-16804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0601 11:50:54.574787   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220601113004-16804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (50.25s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220601115057-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6
E0601 11:51:01.251616   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
E0601 11:51:01.691812   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:14.003650   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.008873   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.020122   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.040262   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.080800   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.160918   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.321059   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:14.641261   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:15.281559   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:16.561757   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:19.121923   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:22.098569   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.104417   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.115421   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.135760   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.175915   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.256317   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.416973   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:22.737231   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:23.378660   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:23.705244   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:24.242076   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:24.658783   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:27.220007   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:32.341270   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:34.482154   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 11:51:42.581375   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 11:51:42.651741   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220601115057-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (50.247645236s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.25s)

TestStartStop/group/no-preload/serial/DeployApp (10.74s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context no-preload-20220601115057-16804 create -f testdata/busybox.yaml: (1.622823458s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [84b4e664-fbed-4239-997f-21d595652b84] Pending
helpers_test.go:342: "busybox" [84b4e664-fbed-4239-997f-21d595652b84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0601 11:51:51.394191   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
helpers_test.go:342: "busybox" [84b4e664-fbed-4239-997f-21d595652b84] Running
E0601 11:51:54.962209   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.012135248s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.74s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220601115057-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/no-preload/serial/Stop (12.65s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220601115057-16804 --alsologtostderr -v=3
E0601 11:52:03.063650   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220601115057-16804 --alsologtostderr -v=3: (12.6544672s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.65s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804: exit status 7 (118.358652ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220601115057-16804 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/no-preload/serial/SecondStart (336.79s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220601115057-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220601115057-16804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6: (5m36.269020123s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220601115057-16804 -n no-preload-20220601115057-16804
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.79s)

TestStartStop/group/old-k8s-version/serial/Stop (1.66s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220601114806-16804 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220601114806-16804 --alsologtostderr -v=3: (1.661034726s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220601114806-16804 -n old-k8s-version-20220601114806-16804: exit status 7 (124.098167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220601114806-16804 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rnkpm" [9a79845e-efd3-46b7-80bb-0c7309ca22ca] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0601 11:57:54.129303   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kubenet-20220601113004-16804/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rnkpm" [9a79845e-efd3-46b7-80bb-0c7309ca22ca] Running
E0601 11:57:57.654343   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601110131-16804/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.013829358s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.66s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rnkpm" [9a79845e-efd3-46b7-80bb-0c7309ca22ca] Running
E0601 11:58:03.737751   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/cilium-20220601113006-16804/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00824409s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220601115057-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Done: kubectl --context no-preload-20220601115057-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.647382325s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.66s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220601115057-16804 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/embed-certs/serial/FirstStart (40.98s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220601115855-16804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0601 11:59:07.871711   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601105739-16804/client.crt: no such file or directory
E0601 11:59:32.637164   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/calico-20220601113006-16804/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220601115855-16804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (40.974907012s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.98s)

TestStartStop/group/embed-certs/serial/DeployApp (9.73s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context embed-certs-20220601115855-16804 create -f testdata/busybox.yaml: (1.611858077s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [06c39e6b-766f-44a0-9f17-cb4235186a0d] Pending
E0601 11:59:38.186060   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/skaffold-20220601112852-16804/client.crt: no such file or directory
helpers_test.go:342: "busybox" [06c39e6b-766f-44a0-9f17-cb4235186a0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [06c39e6b-766f-44a0-9f17-cb4235186a0d] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.016649891s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.73s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220601115855-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/embed-certs/serial/Stop (12.58s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220601115855-16804 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220601115855-16804 --alsologtostderr -v=3: (12.579481842s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804: exit status 7 (119.79412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220601115855-16804 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/embed-certs/serial/SecondStart (338.46s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220601115855-16804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6
E0601 12:00:20.697037   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory
E0601 12:00:44.415043   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 12:01:12.108463   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
E0601 12:01:13.997192   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory
E0601 12:01:22.091314   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/bridge-20220601113004-16804/client.crt: no such file or directory
E0601 12:01:23.698928   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/kindnet-20220601113005-16804/client.crt: no such file or directory
E0601 12:01:49.640206   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:49.646639   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:49.658028   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:49.680245   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:49.722422   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:49.804418   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:49.964843   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:50.285622   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:50.927981   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:52.209280   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory
E0601 12:01:54.771700   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601115057-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220601115855-16804 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.6: (5m37.923159479s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220601115855-16804 -n embed-certs-20220601115855-16804
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.46s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-n4ksx" [6b9cf5f5-152e-49cb-9646-876836323cd4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-n4ksx" [6b9cf5f5-152e-49cb-9646-876836323cd4] Running
E0601 12:05:44.411671   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/enable-default-cni-20220601113004-16804/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.0267176s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.64s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-n4ksx" [6b9cf5f5-152e-49cb-9646-876836323cd4] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007919918s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220601115855-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:293: (dbg) Done: kubectl --context embed-certs-20220601115855-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.634652621s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.64s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220601115855-16804 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.17s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601120641-16804 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6
E0601 12:06:43.764959   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601120641-16804 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (41.1729274s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (41.17s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.71s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) Done: kubectl --context default-k8s-different-port-20220601120641-16804 create -f testdata/busybox.yaml: (1.596719154s)
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ec08b844-95c4-499e-96b1-7c128177afec] Pending
helpers_test.go:342: "busybox" [ec08b844-95c4-499e-96b1-7c128177afec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [ec08b844-95c4-499e-96b1-7c128177afec] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.014784218s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.71s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.78s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220601120641-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.66s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220601120641-16804 --alsologtostderr -v=3
E0601 12:07:37.034548   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/false-20220601113005-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220601120641-16804 --alsologtostderr -v=3: (12.65611734s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.66s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804: exit status 7 (127.436782ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220601120641-16804 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (336.11s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601120641-16804 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220601120641-16804 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.6: (5m35.525757708s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601120641-16804 -n default-k8s-different-port-20220601120641-16804
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (336.11s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rhjjp" [4338b89b-deb5-472b-b3a2-e8316af44b6a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rhjjp" [4338b89b-deb5-472b-b3a2-e8316af44b6a] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.014107155s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.59s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rhjjp" [4338b89b-deb5-472b-b3a2-e8316af44b6a] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008724162s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220601120641-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:293: (dbg) Done: kubectl --context default-k8s-different-port-20220601120641-16804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.58382382s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.59s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.48s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220601120641-16804 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/newest-cni/serial/FirstStart (38.38s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220601121425-16804 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220601121425-16804 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (38.381324426s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.38s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220601121425-16804 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (12.74s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220601121425-16804 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220601121425-16804 --alsologtostderr -v=3: (12.735444829s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.74s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804: exit status 7 (122.585467ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220601121425-16804 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/newest-cni/serial/SecondStart (17.93s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220601121425-16804 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6
E0601 12:15:20.700624   16804 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14079-15668-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/auto-20220601113004-16804/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220601121425-16804 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6: (17.392438333s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220601121425-16804 -n newest-cni-20220601121425-16804
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.93s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220601121425-16804 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

Test skip (18/288)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestAddons/parallel/Registry (14.75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 14.140411ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-pcqd7" [fb7947e7-9695-4935-bb24-e8cf960958a3] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011883283s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-9g4hx" [94ceec23-1f67-4fa9-b8dd-098e8bf0d1cf] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009218447s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220601105739-16804 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220601105739-16804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220601105739-16804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.667856279s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.75s)
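
The skipped remainder would have exercised the registry from the host; the probe that did run checks reachability from inside the cluster, since registry.kube-system.svc.cluster.local only resolves on the cluster's DNS — hence the throwaway busybox pod. A sketch of that same probe as a standalone Go program; the context name is the profile from this log, and -t is dropped because there is no TTY when run outside a terminal:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Spin up a throwaway busybox pod and probe the registry service by
        // its cluster DNS name; --rm deletes the pod once the probe exits.
        cmd := exec.Command("kubectl", "--context", "addons-20220601105739-16804",
            "run", "--rm", "-i", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("registry unreachable from inside the cluster:", err)
        }
    }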

TestAddons/parallel/Ingress (12.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220601105739-16804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220601105739-16804 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220601105739-16804 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [56a6c7f6-f8b8-481a-a725-8ab743ff5ca5] Pending
helpers_test.go:342: "nginx" [56a6c7f6-f8b8-481a-a725-8ab743ff5ca5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [56a6c7f6-f8b8-481a-a725-8ab743ff5ca5] Running
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010382485s
addons_test.go:212: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220601105739-16804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.23s)
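
The curl above exercises host-based ingress routing: the request goes to 127.0.0.1, but the Host: nginx.example.com header makes the NGINX ingress controller match the nginx.example.com rule. The same trick in Go, as a sketch — it assumes the ingress is reachable on localhost port 80, which in this test is only true from inside the minikube node:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        // Overriding req.Host sets the HTTP Host header without changing
        // the TCP destination — the equivalent of curl's -H 'Host: ...'.
        req.Host = "nginx.example.com"
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }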

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (12.17s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220601110131-16804 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220601110131-16804 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-dps67" [b98930bb-4243-4637-a323-b2d5d233bd78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-74cf8bc446-dps67" [b98930bb-4243-4637-a323-b2d5d233bd78] Running
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.010113357s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (12.17s)
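
The final connectivity check is skipped because the Docker driver on macOS needs port forwarding (minikube issue #7383). On a driver whose node IP is routable from the host, that check reduces to polling the service's NodePort until the echo server answers. A sketch under that assumption — the node IP and port below are placeholders, not values from this run:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholder node IP and NodePort; take the real values from
        // `minikube service --url` or `kubectl get svc`.
        url := "http://192.168.49.2:31234/"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                fmt.Println("service reachable:", resp.Status)
                return
            }
            // Back off briefly between attempts while the pod and
            // kube-proxy rules come up.
            time.Sleep(5 * time.Second)
        }
        fmt.Println("service never became reachable")
    }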

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.69s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220601113004-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220601113004-16804
--- SKIP: TestNetworkPlugins/group/flannel (0.69s)

TestNetworkPlugins/group/custom-flannel (0.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220601113005-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220601113005-16804
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.58s)

TestStartStop/group/disable-driver-mounts (0.59s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220601120640-16804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220601120640-16804
--- SKIP: TestStartStop/group/disable-driver-mounts (0.59s)